Is There A Future Beyond Writing Great Code?


Ronald Mendez



Let’s do a quick exercise. Say you’ve been working professionally as a developer for more than five years. You’ve gained hands-on experience through dozens of projects and kept your skillset sharp by learning about new techniques, tools, and frameworks. You contribute to different libraries, routinely refactor the code you write, and periodically exchange code reviews with your colleagues.

But then someone comes up and asks you that one question you haven’t had the chance to figure out: Where do you see yourself, ten years from now?

You might be worried by the idea that if you continue down the same road, you’ll simply be an older developer who codes a bit better and a bit faster. Some developers are happy with this thought and can’t wait to continue down that road. But others might realize that the rollercoaster of lessons and growth they’ve been through is quickly shifting into cruise-control mode.

Once you feel you’re in complete control of your role as a developer, you start feeling the itch to do more. Not more of the same, but more personal growth instead. Maybe something different.

During the past few years of my career, I’ve been looking for answers. I got the chance to work with (and learn from) many successful developers who managed to transition into highly influential roles in which they make the most out of their technical background. Each of them explored a different path in which they were able to make an organic transition, based on a balance between their core skills and their complementary skills.

Where Can We Go From Here?

There are some new paths we can explore that will push us beyond our comfort zones while still drawing on the skillset we’ve worked so hard to cultivate.

As developers, most of the articles we read, the programming books we study, and even the advice we get from our peers are all tailored to help us focus solely on writing better code. Beyond that, we’re not really taught how to work better or, to put it more philosophically, how to evolve.

We usually have no clue about what comes after achieving the goals we set for ourselves when we started our careers, or whether there’s even something we want to do other than code eight hours a day for the rest of our lives. It’s common to underestimate the contribution we could make to the team if we were doing something other than writing code. We’re not sure how we can make a bigger impact, even though our perspective and skills are definitely needed in more influential positions.

Listen To The Industry

Back in 2008, when I started my career as a frontend developer, there wasn’t a person in the world who hadn’t heard of Mark Zuckerberg, the young programmer who became a millionaire while changing the way people communicate. Millennials began to romanticize the idea of legally getting rich while wearing a hoodie. Suddenly, almost every person from my generation wanted to become a developer.

Now, over a decade later, we’re starting to feel the true impact of this boom of coders. Through this year’s Stack Overflow Survey, we learned that more than two-thirds of respondents have less than ten years of professional coding experience.

We can clearly see that experienced developers with leadership skills are scarce, so companies now have to find creative ways to deploy their best talent so that it can oversee more junior developers while maintaining the quality of the work. This creates an organic leadership structure within growing teams.

The industry continues to grow at a rapid pace, and so do our roles as developers. It has become more common to find directors and managers who started out as programmers, and companies are now opening up more leadership positions that require development backgrounds.

It’s safe to say that, even though programming was once billed as the next blue-collar job, the role of the developer is growing into highly influential positions within organizations. But there is no written roadmap or proven formula to guide us through that transition.

What Are Some Of Our Options?

There came a point in my career in which I was asked the dreaded question about the future I envision for myself. I had no answer. In fact, it triggered even more questions that hadn’t crossed my mind.

I was already working as a frontend lead, so I had been given more and more responsibilities apart from writing code, which made me think of a possible future in which I probably wouldn’t be programming. The possibility of having more impact across different projects was definitely appealing.

So I set out to research what options could be interesting for my future. I looked at the path that some colleagues had taken in which they had successfully transitioned from the role of developers to important positions within the company. Most of the cases consisted of taking small steps and being in the right place at the right time. But overall, they all ended up involving themselves in these three main groups of activities:

  1. Managing teams and projects
    Leading a group of people to greatness sounds exciting, but it’s not easy. For seasoned developers, there are many growth options that involve either managing a group of fellow developers as a team or managing projects across multi-disciplinary teams. Although it’s a highly rewarding option, it requires stepping away from the keyboard and learning to delegate, which can be very tricky for developers who are used to personally solving all of their problems.

    Moving into a position in which we have more control over the process and the team around it will most likely require sacrificing the control we’re used to having when it comes to code.

  2. Mentoring and developing talent
    How many bosses have fantasized about cloning their top developers? In the real world, this is still not likely to happen, so smart bosses do the next best thing: they set up processes in which the savviest coders can actively pass along their knowledge to their peers.

    We have to keep in mind that even though some developers do this naturally in their day-to-day, it’s always more effective if senior developers are given a more formal role in which they can routinely allocate their time to work on the growth of their teams. This can be done with code reviews, workshops, and individual assessments with some colleagues.

  3. Being in the business of technology
    It’s very common to hear developers complain about how projects were pitched or defined when they were sold to clients. And by that point, it’s usually too late to complain.

    In my experience, I found myself happier working on projects in which developers had been involved during the sale. It’s always great to have a logical-minded ally flagging potential technical issues in a room where nobody else had a clue.

    The roles of consultants and technical directors are crucial in large digital projects. The involvement of developers in client workshops and drafting technical documentation at the start of any project can potentially be game-changers for the lifecycle of a project.

Working On A New Set Of Tools

Let’s say we want to continue growing and to embark on a future where we do more than just write code. Once we have an idea about where we’re headed, it’s very likely that we’re not yet prepared for the leap. After all, we’ve only been focusing on acquiring skills that make us better developers.

Once we realize we have a lot to learn, we need to start working on the right set of skills. This time it will be different: we won’t be learning new languages, frameworks, or libraries. We’re going to need to stock up on skills that might not have felt important in the past but are crucial for taking the next steps into these uncertain territories.

Communication

For anybody who has a job at any company, this one is a no-brainer. Communication is the core of collaboration within any type of organization. Unfortunately, programmers have been given a free pass in this area for many years. The demand for logical-minded, hard-working, passionate individuals has allowed us to thrive without really having great communication skills, and even to be a socially awkward bunch.

If we have any aspirations to work with different teams and clients, it’s very clear that we will have to work on improving all aspects of our communication. One-on-one meetings, presentations, and important emails will all need to be carefully polished from now on.

Ownership

Our logical mindsets have shaped the way we organize our work. As developers, we usually have a black-and-white sense of where our work begins and where it ends. This is positive when it gives us a clear understanding of the work that needs to be done by us, but it sometimes prevents us from pushing our boundaries and working outside of our comfort zones.

The first order of business is to start taking ownership of all aspects of the work we’re involved in. By blurring the line that defines where a developer’s work ends, we’re able to take on new responsibilities and eventually transition into different roles.

Leadership

Wherever we’re headed in our careers, we’re going to need our teammates to trust us. We’ll need them to know we’re headed in the right direction, even if for a moment it’s not totally clear.

In order to achieve this, we’ll need to be able to prove our knowledge, we’ll need to be confident in our decisions, and we will definitely need to be able to acknowledge our mistakes and quickly learn from them.

This is not a simple task and it’s not something you will be able to check off a list. It’s going to require our dedication for as long as we wish to continue growing outside the development bubble.

Get To Work

Once we’re sure we want to take a leap in our career, we have to start moving in the right direction. The first step would be to explore the options, decide which path you want to pursue, and see how that path aligns with your current role.

Does your company offer a space in which you could be a mentor or a manager? Do you think there’s a chance of making it happen there, or will you need to continue your growth elsewhere? These are just some of the questions you have to ask yourself; they will also lead to conversations with some of your teammates and managers.

Taking a step in a new direction will require putting in the hard work, having an open mind, and being resilient enough to fail and try again, as many times as it takes.


Source: Smashing Magazine, Is There A Future Beyond Writing Great Code?

Getting Started With An Express And ES6+ JavaScript Stack


Jamie Corkhill



This article is the second part in a series, with part one located here, which provided basic and (hopefully) intuitive insight into Node.js, ES6+ JavaScript, Callback Functions, Arrow Functions, APIs, the HTTP Protocol, JSON, MongoDB, and more.

In this article, we’ll build upon the skills we attained in the previous one, learning how to implement and deploy a MongoDB Database for storing user booklist information, build an API with Node.js and the Express Web Application framework to expose that database and perform CRUD Operations upon it, and more. Along the way, we’ll discuss ES6 Object Destructuring, ES6 Object Shorthand, the Async/Await syntax, the Spread Operator, and we’ll take a brief look at CORS, the Same Origin Policy, and more.

In a later article, we’ll refactor our codebase as to separate concerns by utilizing three-layer architecture and achieving Inversion of Control via Dependency Injection, we’ll perform JSON Web Token and Firebase Authentication based security and access control, learn how to securely store passwords, and employ AWS Simple Storage Service to store user avatars with Node.js Buffers and Streams — all the while utilizing PostgreSQL for data persistence. Along the way, we will re-write our codebase from the ground up in TypeScript as to examine Classical OOP concepts (such as Polymorphism, Inheritance, Composition, and so on) and even design patterns like Factories and Adapters.

A Word Of Warning

There is a problem with the majority of articles discussing Node.js out there today. Most of them, not all of them, go no further than depicting how to set up Express Routing, integrate Mongoose, and perhaps utilize JSON Web Token Authentication. The problem is that they don’t talk about architecture, or security best practices, or clean coding principles, or ACID Compliance, Relational Databases, Fifth Normal Form, the CAP Theorem, or Transactions. It’s either assumed that you know about all of that coming in, or that you won’t be building projects large or popular enough to warrant that aforementioned knowledge.

There appear to be a few different types of Node developers — among others, some are new to programming in general, and others come from a long history of enterprise development with C# and the .NET Framework or the Java Spring Framework. The majority of articles cater to the former group.

In this article, I’m going to do exactly what I just stated that too many articles are doing, but in a follow up article, we are going to refactor our codebase entirely, permitting me to explain principles such as Dependency Injection, Three-Layer Architecture (Controller/Service/Repository), Data Mapping and Active Record, design patterns, unit, integration, and mutation testing, SOLID Principles, Unit of Work, coding against interfaces, security best practices like HSTS, CSRF, NoSQL and SQL Injection Prevention, and so on. We will also migrate from MongoDB to PostgreSQL, using the simple query builder Knex instead of an ORM — permitting us to build our own data access infrastructure and to get close up and personal with the Structured Query Language, the different types of relations (One-to-One, Many-to-Many, etc.), and more. This article, then, should appeal to beginners, but the next few should cater to more intermediate developers looking to improve their architecture.

In this one, we are only going to worry about persisting book data. We won’t handle user authentication, password hashing, architecture, or anything complex like that. All of that will come in the next and future articles. For now, and very basically, we’ll just build a method by which to permit a client to communicate with our web server via the HTTP Protocol as to save book information in a database.

Note: I’ve intentionally kept it extremely simple and perhaps not all that practical here because this article, in and of itself, is extremely long, for I have taken the liberty of deviating to discuss supplemental topics. Thus, we will progressively improve the quality and complexity of the API over this series, but again, because I’m considering this as one of your first introductions to Express, I’m intentionally keeping things extremely simple.

  1. ES6 Object Destructuring
  2. ES6 Object Shorthand
  3. ES6 Spread Operator (…)
  4. Coming up…

ES6 Object Destructuring

ES6 Object Destructuring, or Destructuring Assignment Syntax, is a method by which to extract or unpack values from arrays or objects into their own variables. We’ll start with object properties and then discuss array elements.

const person = {
    name: 'Richard P. Feynman',
    occupation: 'Theoretical Physicist' 
};

// Log properties:
console.log('Name:', person.name); 
console.log('Occupation:', person.occupation);

Such an operation is quite primitive, but it can be somewhat of a hassle considering we have to keep referencing person.something everywhere. Suppose there were 10 other places throughout our code where we had to do that — it would get quite arduous quite fast. For brevity, we could assign these values to their own variables.

const person = {
    name: 'Richard P. Feynman',
    occupation: 'Theoretical Physicist' 
};

const personName = person.name;
const personOccupation = person.occupation;

// Log properties:
console.log('Name:', personName); 
console.log('Occupation:', personOccupation);

Perhaps this looks reasonable, but what if we had 10 other properties nested on the person object as well? That would be many needless lines just to assign values to variables — at which point we’re in danger because if object properties are mutated, our variables won’t reflect that change (remember, only references to the object are immutable with const assignment, not the object’s properties), so basically, we can no longer keep “state” (and I’m using that word loosely) in sync. Pass by reference vs pass by value might come into play here, but I don’t want to stray too far from the scope of this section.
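To make that concrete, here’s a quick sketch of a variable falling out of sync after a property mutation:

const person = {
    name: 'Richard P. Feynman'
};

// Copy the property's value into its own variable.
const personName = person.name;

// `const` prevents reassigning `person`, not mutating its properties.
person.name = 'Julian Schwinger';

console.log(person.name); // Julian Schwinger
console.log(personName);  // Richard P. Feynman <-- Out of sync.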

ES6 Object Destructuring basically lets us do this:

const person = {
    name: 'Richard P. Feynman',
    occupation: 'Theoretical Physicist' 
};

// This is new. It’s called Object Destructuring.
const { name, occupation } = person;

// Log properties:
console.log('Name:', name); 
console.log('Occupation:', occupation);

We are not creating a new object/object literal, we are unpacking the name and occupation properties from the original object and putting them into their own variables of the same name. The names we use have to match the property names that we wish to extract.

Again, the syntax const { a, b } = someObject; is specifically saying that we expect some property a and some property b to exist within someObject (i.e., someObject could be { a: 'dataA', b: 'dataB' }, for example) and that we want to place whatever the values are of those keys/properties within const variables of the same name. That’s why the syntax above would provide us with two variables, const a = someObject.a and const b = someObject.b.

What that means is that there are two sides to Object Destructuring. The “Template” side and the “Source” side, where the const { a, b } side (the left-hand side) is the template and the someObject side (the right-hand side) is the source side — which makes sense — we are defining a structure or “template” on the left that mirrors the data on “source” side.

Again, just to make this clear, here are a few examples:

// ----- Destructure from Object Variable with const ----- //
const objOne = {
    a: 'dataA', 
    b: 'dataB'
};

// Destructure
const { a, b } = objOne;

console.log(a); // dataA
console.log(b); // dataB

// ----- Destructure from Object Variable with let ----- //
let objTwo = {
    c: 'dataC', 
    d: 'dataD'
};

// Destructure
let { c, d } = objTwo;

console.log(c); // dataC
console.log(d); // dataD

// Destructure from Object Literal with const ----- //
const { e, f } = { e: 'dataE', f: 'dataF' }; // <-- Destructure

console.log(e); // dataE
console.log(f); // dataF

// Destructure from Object Literal with let ----- //
let { g, h } = { g: 'dataG', h: 'dataH' }; // <-- Destructure

console.log(g); // dataG
console.log(h); // dataH

In the case of nested properties, mirror the same structure in your destructuring assignment:

const person = {
    name:  'Richard P. Feynman',
    occupation: {
        type:  'Theoretical Physicist',
        location: {
            lat:  1,
            lng:  2
        }
    }
};

// Attempt one:
const { name, occupation } = person;

console.log(name); // Richard P. Feynman
console.log(occupation); // The entire `occupation` object.

// Attempt two:
const { occupation: { type, location } } = person;

console.log(type); // Theoretical Physicist
console.log(location) // The entire `location` object.

// Attempt three:
const { occupation: {  location: { lat, lng } } } = person;

console.log(lat); // 1
console.log(lng); // 2

As you can see, the properties you decide to pull off are optional, and to unpack nested properties, simply mirror the structure of the original object (the source) in the template side of your destructuring syntax. If you attempt to destructure a property that does not exist on the original object, that value will be undefined.
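For example, destructuring a property that doesn’t exist simply yields undefined:

const person = {
    name: 'Richard P. Feynman'
};

const { name, salary } = person;

console.log(name);   // Richard P. Feynman
console.log(salary); // undefined <-- No such property on `person`.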

We can additionally destructure a variable without first declaring it — assignment without declaration — using the following syntax:

let name, occupation;

const person = {
    name: 'Richard P. Feynman',
    occupation: 'Theoretical Physicist' 
};

;({ name, occupation } = person);

console.log(name); // Richard P. Feynman
console.log(occupation); // Theoretical Physicist

We precede the expression with a semicolon as to ensure we don’t accidentally create an IIFE (Immediately Invoked Function Expression) with a function on a previous line (if one such function exists), and the parentheses around the assignment statement are required as to stop JavaScript from treating your left-hand (template) side as a block.
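As a quick demonstration of why the parentheses matter:

let name, occupation;

const person = {
    name: 'Richard P. Feynman',
    occupation: 'Theoretical Physicist'
};

// Without parentheses, the braces are parsed as a block statement:
// { name, occupation } = person; // <-- SyntaxError

// With parentheses, it's an expression, so destructuring works:
;({ name, occupation } = person);

console.log(name); // Richard P. Feynman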

A very common use case of destructuring exists within function arguments:

const config = {
    baseUrl: '<baseURL>',
    awsBucket: '<bucket>',
    secret: '<secret-key>' // <- Make this an env var.
};

// Destructures `baseUrl` and `awsBucket` off `config`.
const performOperation = ({ baseUrl, awsBucket }) => {
    fetch(baseUrl).then(() => console.log('Done'));
    console.log(awsBucket); // <bucket>
};

performOperation(config);

As you can see, we could have just used the normal destructuring syntax we are now used to inside of the function, like this:

const config = {
    baseUrl: '<baseURL>',
    awsBucket: '<bucket>',
    secret: '<secret-key>' // <- Make this an env var.
};

const performOperation = someConfig => {
    const { baseUrl, awsBucket } = someConfig;
    fetch(baseUrl).then(() => console.log('Done'));
    console.log(awsBucket); // <bucket>
};

performOperation(config);

But placing said syntax inside the function signature performs destructuring automatically and saves us a line.

A real-world use case of this is in React Functional Components for props:

import React from 'react';

// Destructure `titleText` and `secondaryText` from `props`.
export default ({ titleText, secondaryText }) => (
    <div>
        <h1>{titleText}</h1>
        <h3>{secondaryText}</h3>
    </div>
);

As opposed to:

import React from 'react';

export default props => (
    <div>
        <h1>{props.titleText}</h1>
        <h3>{props.secondaryText}</h3>
    </div>
);

In both cases, we can set default values to the properties as well:

const personOne = {
    name: 'User One',
    password: 'BCrypt Hash'
};

const personTwo = {
    password: 'BCrypt Hash'
};

const createUser = ({ name = 'Anonymous', password }) => {
    if (!password) throw new Error('InvalidArgumentException');

    console.log(name);
    console.log(password);

    return {
        id: Math.random().toString(36) // <--- Should follow RFC 4122 Spec in a real app.
                .substring(2, 15) + Math.random()
                .toString(36).substring(2, 15),
        name: name,        // <-- We’ll discuss this next.
        password: password // <-- We’ll discuss this next.
    };
};

createUser(personOne); // User One, BCrypt Hash
createUser(personTwo); // Anonymous, BCrypt Hash

As you can see, in the event that name is not present when destructured, we provide it a default value. We can do this with the previous syntax as well:

const { a, b, c = 'Default' } = { a: 'dataA', b: 'dataB' };
console.log(a); // dataA
console.log(b); // dataB
console.log(c); // Default

Arrays can be destructured too:

const myArr = [4, 3];

// Destructuring happens here.
const [valOne, valTwo] = myArr;

console.log(valOne); // 4
console.log(valTwo); // 3

// ----- Destructuring without assignment: ----- //
let a, b;

// Destructuring happens here.
;([a, b] = [10, 2]);

console.log(a + b); // 12

A practical reason for array destructuring occurs with React Hooks. (And there are many other reasons, I’m just using React as an example).

import React, { useState } from "react";

export default () => {
  const [buttonText, setButtonText] = useState("Default");

  return (
    <button onClick={() => setButtonText("Toggled")}>
      {buttonText}
    </button>
  );
}

Notice useState is being destructured off the react module’s exports (a named import), and the state value and setter function are being destructured off the array returned by the useState hook. Again, don’t worry if the above doesn’t make sense — you’d have to understand React — and I’m merely using it as an example.

While there is more to ES6 Object Destructuring, I’ll cover one more topic here: Destructuring Renaming, which is useful to prevent scope collisions, variable shadowing, etc. Suppose we want to destructure a property called name from an object called person, but there is already a variable by the name of name in scope. We can rename on the fly with a colon:

// JS Destructuring Naming Collision Example:
const name = 'Jamie Corkhill';

const person = {
    name: 'Alan Turing'
};

// Rename `name` from `person` to `personName` after destructuring.
const { name: personName } = person;

console.log(name); // Jamie Corkhill <-- As expected.
console.log(personName); // Alan Turing <-- Variable was renamed.

Finally, we can set default values with renaming too:

const name = 'Jamie Corkhill';

const person = {
    location: 'New York City, United States'
};

const { name: personName = 'Anonymous', location } = person;

console.log(name); // Jamie Corkhill
console.log(personName); // Anonymous
console.log(location); // New York City, United States

As you can see, in this case, name from person (person.name) will be renamed to personName and set to the default value of Anonymous if non-existent.

And of course, the same can be performed in function signatures:

const personOne = {
    name:  'User One',
    password:  'BCrypt Hash'
};

const personTwo = {
    password:  'BCrypt Hash'
};

const createUser = ({ name: personName = 'Anonymous', password }) => {
    if (!password) throw new Error('InvalidArgumentException');
    console.log(personName);
    console.log(password);

    return {
        id: Math.random().toString(36).substring(2, 15) + Math.random().toString(36).substring(2, 15),
        name: personName,
        password: password // <-- We’ll discuss this next.
    };
};

createUser(personOne); // User One, BCrypt Hash
createUser(personTwo); // Anonymous, BCrypt Hash

ES6 Object Shorthand

Suppose you have the following factory: (we’ll cover factories later)

const createPersonFactory = (name, location, position) => ({
    name: name,
    location: location,
    position: position
});

One might use this factory to create a person object, as follows. Note also that the factory is implicitly returning an object, evident by the parentheses around the braces of the Arrow Function.

const person = createPersonFactory('Jamie', 'Texas', 'Developer');
console.log(person); // { ... }

That’s what we already know from the ES5 Object Literal Syntax. Notice, however, in the factory function, that the value of each property is the same name as the property identifier (key) itself — that is, location: location or name: name. That turned out to be a pretty common occurrence for JS developers.

With the shorthand syntax from ES6, we may achieve the same result by rewriting the factory as follows:

const createPersonFactory = (name, location, position) => ({
    name,
    location,
    position
});

const person = createPersonFactory('Jamie', 'Texas', 'Developer');
console.log(person);

Producing the output:

{ name: 'Jamie', location: 'Texas', position: 'Developer' }

It’s important to realize that we can only use this shorthand when the object we wish to create is being dynamically created based on variables, where the variable names are the same as the names of the properties to which we want the variables assigned.
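If the variable name differs from the desired key, we simply fall back to the explicit key: value form:

const fullName = 'Jamie';

const personOne = { fullName };       // { fullName: 'Jamie' } <-- Shorthand works; names match.
const personTwo = { name: fullName }; // { name: 'Jamie' } <-- Explicit key needed; names differ.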

This same syntax works with object values:

const createPersonFactory = (name, location, position, extra) => ({
    name,
    location,
    position,
    extra        // <- right here. 
});

const extra = {
    interests: [
        'Mathematics',
        'Quantum Mechanics',
        'Spacecraft Launch Systems'
    ],
    favoriteLanguages: [
        'JavaScript',
        'C#'
    ]
};

const person = createPersonFactory('Jamie', 'Texas', 'Developer', extra);
console.log(person);

Producing the output:

{ 
    name: 'Jamie',
    location: 'Texas',
    position: 'Developer',
    extra: { 
        interests: [ 
            'Mathematics',
            'Quantum Mechanics',
            'Spacecraft Launch Systems' 
        ],
        favoriteLanguages: [ 'JavaScript', 'C#' ]
     } 
}

As a final example, this works with object literals as well:

const id = '314159265358979';
const name = 'Archimedes of Syracuse';
const location = 'Syracuse';

const greatMathematician = {
    id,
    name,
    location
};

ES6 Spread Operator (…)

The Spread Operator permits us to do a variety of things, some of which we’ll discuss here.

Firstly, we can spread out properties from one object on to another object:

const myObjOne = { a: 'a', b: 'b' };
const myObjTwo = { ...myObjOne };

This has the effect of copying all of the properties of myObjOne onto myObjTwo, such that myObjTwo is now { a: 'a', b: 'b' }. We can use this method to override previous properties. Suppose a user wants to update their account:

const user = {
    name: 'John Doe', 
    email: 'john@domain.com',
    password: '',
    bio: 'Lorem ipsum'
};

const updates = {
    password: '',
    bio: 'Ipsum lorem',
    email: 'j@domain.com'
};

const updatedUser = {
    ...user,    // <- original
    ...updates  // <- updates
};

console.log(updatedUser);

/*
 {
     name: 'John Doe',
     email: 'j@domain.com', // Updated
     password: '',          // Updated (to the same value, in this case)
     bio: 'Ipsum lorem'     // Updated
 }
 */

The same can be performed with arrays:

const apollo13Astronauts = ['Jim', 'Jack', 'Fred'];
const apollo11Astronauts = ['Neil', 'Buzz', 'Michael'];

const unionOfAstronauts = [...apollo13Astronauts, ...apollo11Astronauts];

console.log(unionOfAstronauts);
// ['Jim', 'Jack', 'Fred', 'Neil', 'Buzz', 'Michael'];

Notice here that we created a union of both sets (arrays) by spreading the arrays out into a new array.

There is a lot more to the Rest/Spread Operator, but it is out of scope for this article. It can be used to accept a variable number of arguments in a function, for example. If you want to learn more, view the MDN Documentation here.
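As a brief taste of the rest side of the operator, here is a small sketch of a function that gathers any number of arguments into an array via rest parameters:

// Rest parameters collect all passed arguments into a real array.
const sum = (...numbers) => numbers.reduce((total, n) => total + n, 0);

console.log(sum(1, 2, 3, 4)); // 10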

ES6 Async/Await

Async/Await is a syntax to ease the pain of promise chaining.

The await reserved keyword permits you to “await” the settling of a promise, but it may only be used in functions marked with the async keyword. Suppose I have a function that returns a promise. In a new async function, I can await the result of that promise instead of using .then and .catch.

// Returns a promise.
const myFunctionThatReturnsAPromise = () => {
    return new Promise((resolve, reject) => {
        setTimeout(() => resolve('Hello'), 3000);
    });
}

const myAsyncFunction = async () => {
    const promiseResolutionResult = await myFunctionThatReturnsAPromise();
    console.log(promiseResolutionResult);
};

// Writes the log statement after three seconds.
myAsyncFunction();

There are a few things to note here. When we use await in an async function, only the resolved value goes into the variable on the left-hand side. If the function rejects, that’s an error that we have to catch, as we’ll see in a moment. Additionally, any function marked async will, by default, return a promise.

Let’s suppose I needed to make two API calls, one with the response from the former. Using promises and promise chaining, you might do it this way:

const makeAPICall = route => new Promise((resolve, reject) => {
    console.log(route);
    resolve(route);
});

const main = () => {
    makeAPICall('/whatever')
        .then(response => makeAPICall(response + ' second call'))
        .then(response => console.log(response + ' logged'))
        .catch(err => console.error(err))
};

main();

// Result:
/* 
/whatever 
/whatever second call 
/whatever second call logged
*/

What’s happening here is that we first call makeAPICall passing to it /whatever, which gets logged the first time. The promise resolves with that value. Then we call makeAPICall again, passing to it /whatever second call, which gets logged, and again, the promise resolves with that new value. Finally, we take that new value /whatever second call which the promise just resolved with, and log it ourselves in the final log, appending on logged at the end. If this doesn’t make sense, you should look into promise chaining.

Using async/await, we can refactor to the following:

const main = async () => {
    const resultOne = await makeAPICall('/whatever');
    const resultTwo = await makeAPICall(resultOne + ' second call');
    console.log(resultTwo + ' logged');
};

Here is what will happen. The entire function will stop executing at the very first await statement until the promise from the first call to makeAPICall resolves; upon resolution, the resolved value will be placed in resultOne. When that happens, the function will move to the second await statement, again pausing right there while the promise settles. When the promise resolves, the resolution result will be placed in resultTwo. If this idea about function execution sounds blocking, fear not: it’s still asynchronous, and I’ll discuss why in a minute.

This only depicts the “happy” path. In the event that one of the promises rejects, we can catch that with try/catch, for if the promise rejects, an error will be thrown — which will be whatever error the promise rejected with.

const main = async () => {
    try {
        const resultOne = await makeAPICall('/whatever');
        const resultTwo = await makeAPICall(resultOne + ' second call');
        console.log(resultTwo + ' logged');
    } catch (e) {
        console.log(e)
    }
};

As I said earlier, any function declared async will return a promise. So, if you want to call an async function from another function, you can use normal promises, or await if you declare the calling function async. However, if you want to call an async function from top-level code and await its result, then you’d have to use .then and .catch.

For example:

const returnNumberOne = async () => 1;

returnNumberOne().then(value => console.log(value)); // 1

Or, you could use an Immediately Invoked Function Expression (IIFE):

(async () => {
    const value = await returnNumberOne();
    console.log(value); // 1
})();

When you use await in an async function, the execution of the function will stop at that await statement until the promise settles. However, all other functions are free to proceed with execution, thus no extra CPU resources are allocated nor is the thread ever blocked. I’ll say that again — operations in that specific function at that specific time will stop until the promise settles, but all other functions are free to fire. Consider an HTTP Web Server — on a per-request basis, all functions are free to fire for all users concurrently as requests are made, it’s just that the async/await syntax will provide the illusion that an operation is synchronous and blocking as to make promises easier to work with, but again, everything will remain nice and async.
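Here is a small runnable sketch of that non-blocking behavior: two async functions with different delays, where the one called second finishes first because awaiting in one function never blocks the other.

const delay = ms => new Promise(resolve => setTimeout(resolve, ms));

const taskA = async () => {
    await delay(1000); // taskA pauses here...
    console.log('Task A done');
};

const taskB = async () => {
    await delay(500); // ...but taskB is free to run in the meantime.
    console.log('Task B done');
};

taskA();
taskB();

// Task B done   (after ~0.5 seconds)
// Task A done   (after ~1 second)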

This isn’t all there is to async/await, but it should help you to grasp the basic principles.

Classical OOP Factories

We are now going to leave the JavaScript world and enter the Java world. There can come a time when the creation process of an object (in this case, an instance of a class — again, Java) is fairly complex or when we want to have different objects produced based upon a series of parameters. An example might be a function that creates different error objects. A factory is a common design pattern in Object-Oriented Programming and is basically a function that creates objects. To explore this, let us move away from JavaScript into the world of Java. This will make sense to developers who come from a Classical OOP (i.e, not prototypal), statically typed language background. If you are not one such developer, feel free to skip this section. This is a small deviation, and so if following along here interrupts your flow of JavaScript, then again, please skip this section.

A common creational pattern, the Factory Pattern permits us to create objects without exposing the required business logic to perform said creation.

Suppose we are writing a program that permits us to visualize primitive shapes in n-dimensions. If we provide a cube, for example, we’d see a 2D cube (a square), a 3D cube (a cube), and a 4D cube (a Tesseract, or Hypercube). Here is how this might be done, trivially, and barring the actual drawing part, in Java.

// Main.java

// Defining an interface for the shape (can be used as a base type)
interface IShape {
    void draw();
}

// Implementing the interface for 2-dimensions:
class TwoDimensions implements IShape {
    @Override
    public void draw() {
        System.out.println("Drawing a shape in 2D.");
    }
}

// Implementing the interface for 3-dimensions:
class ThreeDimensions implements IShape {
    @Override
    public void draw() {
        System.out.println("Drawing a shape in 3D.");
    }
}

// Implementing the interface for 4-dimensions:
class FourDimensions implements IShape {
    @Override
    public void draw() {
        System.out.println("Drawing a shape in 4D.");
    }
}

// Handles object creation
class ShapeFactory {
    // Factory method (notice return type is the base interface)
    public IShape createShape(int dimensions) {
        switch(dimensions) {
            case 2:
                return new TwoDimensions();
            case 3:
                return new ThreeDimensions();
            case 4:
                return new FourDimensions();
            default: 
                throw new IllegalArgumentException("Invalid dimension.");
        }
    }
}

// Main class and entry point.
public class Main {
    public static void main(String[] args) throws Exception {
        ShapeFactory shapeFactory = new ShapeFactory();
        IShape fourDimensions = shapeFactory.createShape(4);
        fourDimensions.draw(); // Drawing a shape in 4D.
    }
}

As you can see, we define an interface that specifies a method for drawing a shape. By having the different classes implement the interface, we can guarantee that all shapes can be drawn (for they all must have an overridable draw method as per the interface definition). Considering this shape is drawn differently depending upon the dimensions within which it’s viewed, we define helper classes that implement the interface as to perform the GPU intensive work of simulating n-dimensional rendering. ShapeFactory does the work of instantiating the correct class — the createShape method is a factory, and like the definition above, it is a method that returns an object of a class. The return type of createShape is the IShape interface because the IShape interface is the base type of all shapes (because they have a draw method).

This Java example is fairly trivial, but you can easily see how useful it becomes in larger applications where the setup to create an object might not be so simple. An example of this would be a video game. Suppose the user has to survive different enemies. Abstract classes and interfaces might be used to define core functions available to all enemies (and methods that can be overridden), perhaps employing the delegation pattern (favor composition over inheritance as the Gang of Four suggested so you don’t get locked into extending a single base class and to make testing/mocking/DI easier). For enemy objects instantiated in different ways, the interface would permit factory object creation while relying on the generic interface type. This would be very relevant if the enemy was created dynamically.

Another example is a builder function. Suppose we utilize the Delegation Pattern to have a class delegate work to other classes that honor an interface. We could place a static build method on the class to have it construct its own instance (assuming you were not using a Dependency Injection Container/Framework). Instead of having to call each setter, you can do this:

public class User {
    private IMessagingService msgService;
    private String name;
    private int age;
    
    public User(String name, int age, IMessagingService msgService) {
        this.name = name;
        this.age = age;
        this.msgService = msgService;
    }
    
    public static User build(String name, int age) {
        return new User(name, age, new SomeMessageService());
    }
}

I’ll be explaining the Delegation Pattern in a later article if you’re not familiar with it — basically, through Composition and in terms of object-modeling, it creates a “has-a” relationship instead of an “is-a” relationship as you’d get with inheritance. If you have a Mammal class and a Dog class, and Dog extends Mammal, then a Dog is-a Mammal. Whereas, if you had a Bark class, and you just passed instances of Bark into the constructor of Dog, then Dog has-a Bark. As you might imagine, this especially makes unit testing easier, for you can inject mocks and assert facts about the mock as long as the mock honors the interface contract in the testing environment.

The static “build” factory method above simply creates a new object of User and passes a concrete MessageService in. Notice how this follows from the definition above — not exposing the business logic to create an object of a class, or, in this case, not exposing the creation of the messaging service to the caller of the factory.

Again, this is not necessarily how you would do things in the real world, but it presents the idea of a factory function/method quite well. We might use a Dependency Injection container instead, for example. Now back to JavaScript.
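As a quick bridge back, here is a rough, hypothetical sketch of the Java ShapeFactory above rewritten as a plain JavaScript factory function, in the spirit of the createPersonFactory from earlier:

const createShape = dimensions => {
    // Map each supported dimension to an object honoring a `draw` "interface".
    const shapes = {
        2: { draw: () => console.log('Drawing a shape in 2D.') },
        3: { draw: () => console.log('Drawing a shape in 3D.') },
        4: { draw: () => console.log('Drawing a shape in 4D.') }
    };

    const shape = shapes[dimensions];
    if (!shape) throw new Error('Invalid dimension.');

    return shape;
};

createShape(4).draw(); // Drawing a shape in 4D.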

Starting With Express

Express is a Web Application Framework for Node (available via an NPM Module) that permits one to create an HTTP Web Server. It’s important to note that Express is not the only framework to do this (there exists Koa, Fastify, etc.), and that, as seen in the previous article, Node can function without Express as a stand-alone entity. (Express is merely a module that was designed for Node — Node can do many things without it, although Express is popular for Web Servers).

Again, let me make a very important distinction. There is a dichotomy present between Node/JavaScript and Express. Node, the runtime/environment within which you run JavaScript, can do many things — such as permitting you to build React Native apps, desktop apps, command-line tools, etc. — Express is nothing but a lightweight framework that permits you to use Node/JS to build web servers as opposed to dealing with Node’s low-level network and HTTP APIs. You don’t need Express to build a web server.

Before starting this section, if you are not familiar with HTTP and HTTP Requests (GET, POST, etc.), then I encourage you to read the corresponding section of my former article, which is linked above.

Using Express, we’ll set up different routes to which HTTP Requests may be made, as well as the related endpoints (which are callback functions) that will fire when a request is made to that route. Don’t worry if routes and endpoints are currently non-sensical — I’ll be explaining them later.

Unlike other articles, I’ll take the approach of writing the source code as we go, line-by-line, rather than dumping the entire codebase into one snippet and then explaining later. Let’s begin by opening a terminal (I’m using Terminus on top of Git Bash on Windows — which is a nice option for Windows users who want a Bash Shell without setting up the Linux Subsystem), setting up our project’s boilerplate, and opening it in Visual Studio Code.

mkdir server && cd server
touch server.js
npm init -y
npm install express
code .

Inside the server.js file, I’ll begin by requiring express using the require() function.

const express = require('express');

require('express') tells Node to go out and get the Express module we installed earlier, which is currently inside the node_modules folder (for that’s what npm install does — it creates a node_modules folder and puts modules and their dependencies in there). By convention, and when dealing with Express, we call the variable that holds the return result from require('express') express, although it may be called anything.

This returned result, which we have called express, is actually a function — a function we’ll have to invoke to create our Express app and set up our routes. Again, by convention, we call this app, with app being the return result of express() — that is, the return result of invoking the function stored under the name express.

const express = require('express'); 
const app = express();

// Note that the above variable names are the convention, but not required.
// An example such as that below could also be used.

const foo = require('express');
const bar = foo();

// Note also that the node module we installed is called express.

The line const app = express(); simply puts a new Express Application inside of the app variable. It calls a function named express (the return result of require('express')) and stores its return result in a constant named app. If you come from an object-oriented programming background, consider this equivalent to instantiating a new object of a class, where app would be the object and where express() would call the constructor function of the express class. Remember, JavaScript allows us to store functions in variables — functions are first-class citizens. The express variable, then, is nothing more than a mere function. It’s provided to us by the developers of Express.

I apologize in advance if I’m taking a very long time to discuss what is actually very basic, but the above, although primitive, confused me quite a lot when I was first learning back-end development with Node.

Inside the Express source code, which is open-source on GitHub, the variable we called express is a function entitled createApplication, which, when invoked, performs the work necessary to create an Express Application:

A snippet of Express source code:

exports = module.exports = createApplication;

/*
 * Create an express application
 */

// This is the function we are storing in the express variable. (- Jamie)
function createApplication() {

   // This is what I mean by "Express App" (- Jamie)
   var app = function(req, res, next) {

      app.handle(req, res, next);

   };

   mixin(app, EventEmitter.prototype, false);
   mixin(app, proto, false);

   // expose the prototype that will get set on requests

   app.request = Object.create(req, {

      app: { configurable: true, enumerable: true, writable: true, value: app      }

   })

   // expose the prototype that will get set on responses

   app.response = Object.create(res, {

      app: { configurable: true, enumerable: true, writable: true, value: app }

   })

   app.init();

   // See - `app` gets returned. (- Jamie)
   return app;
}

GitHub: https://github.com/expressjs/express/blob/master/lib/express.js

With that short deviation complete, let’s continue setting up Express. Thus far, we have required the module and set up our app variable.

const express = require('express');
const app = express();

From here, we have to tell Express to listen on a port. Any HTTP Requests made to the URL and Port upon which our application is listening will be handled by Express. We do that by calling app.listen(...), passing to it the port and a callback function which gets called when the server starts running:

const PORT = 3000;

app.listen(PORT, () => console.log(`Server is up on port ${PORT}.`));

We notate the PORT variable in capitals by convention, for it is a constant that will never change. You could do that with every variable you declare const, but that would look messy. It’s up to the developer or development team to decide on notation, so we’ll use capitals sparingly. I use const everywhere as a method of “defensive coding” — that is, if I know that a variable is never going to change then I might as well just declare it const. Since I define everything const, capitalization lets me distinguish true global constants from variables that merely stay the same on a per-request basis.

Here is what we have thus far:

const express = require('express'); 
const app = express(); 

const PORT = 3000;

// We will build our API here.
// ...

// Binding our application to port 3000.
app.listen(PORT, () => {
   console.log(`Server is up on port ${PORT}.`);
});

Let’s test this to see if the server starts running on port 3000.

I’ll open a terminal and navigate to our project’s root directory. I’ll then run node server/server.js. Note that this assumes you have Node already installed on your system (You can check with node -v).

If everything works, you should see the following in the terminal:

Server is up on port 3000.

Go ahead and hit Ctrl + C to bring the server back down.

If this doesn’t work for you, or if you see an error such as EADDRINUSE, then it means you may have a service already running on port 3000. Pick another port number, like 3001, 3002, 5000, 8000, etc. Be aware, lower number ports are reserved and there is an upper bound of 65535.

At this point, it’s worth taking another small deviation as to understand servers and ports in the context of computer networking. We’ll return to Express in a moment. I take this approach, rather than introducing servers and ports first, for the purpose of relevance. That is, it is difficult to learn a concept if you fail to see its applicability. In this way, you are already aware of the use case for ports and servers with Express, so the learning experience will be more pleasurable.

A Brief Look At Servers And Ports

A server is simply a computer or computer program that provides some sort of “functionality” to the clients that talk to it. More generally, it’s a device, usually connected to the Internet, that handles connections in a pre-defined manner. In our case, that “pre-defined manner” will be HTTP or the HyperText Transfer Protocol. Servers that use the HTTP Protocol are called Web Servers.

When building an application, the server is a critical component of the “client-server model”, for it permits the sharing and syncing of data (generally via databases or file systems) across devices. It’s a cross-platform approach, in a way, for the SDKs of platforms against which you may want to code — be they web, mobile, or desktop — all provide methods (APIs) to interact with a server over HTTP or TCP/UDP Sockets. It’s important to make a distinction here — by APIs, I mean programming language constructs to talk to a server, like XMLHttpRequest or the Fetch API in JavaScript, or HttpUrlConnection in Java, or even HttpClient in C#/.NET. This is different from the kind of REST API we’ll be building in this article to perform CRUD Operations on a database.

To talk about ports, it’s important to understand how clients connect to a server. A client requires the IP Address of the server and the Port Number of our specific service on that server. An IP Address, or Internet Protocol Address, is just an address that uniquely identifies a device on a network. Public and private IPs exist, with private addresses commonly used behind a router or Network Address Translator on a local network. You might see private IP Addresses of the form 192.168.XXX.XXX or 10.0.XXX.XXX. When reading an IP Address aloud, the periods are pronounced “dot”. So 192.168.0.1 (a common router IP Addr.) might be pronounced, “one nine two dot one six eight dot zero dot one”. (By the way, if you’re ever in a hotel and your phone/laptop won’t direct you to the AP captive portal, try typing 192.168.0.1 or 192.168.1.1 or similar directly into Chrome).

For simplicity, and since this is not an article about the complexities of computer networking, assume that an IP Address is equivalent to a house address, allowing you to uniquely identify a house (where a house is analogous to a server, client, or network device) in a neighborhood. One neighborhood is one network. Put together all of the neighborhoods in the United States, and you have the public Internet. (This is a basic view, and there are many more complexities — firewalls, NATs, ISP Tiers (Tier One, Tier Two, and Tier Three), fiber optics and fiber optic backbones, packet switches, hops, hubs, etc., subnet masks, etc., to name just a few — in the real networking world.) The traceroute Unix command can provide more insight into the above, displaying the path (and associated latency) that packets take through a network as a series of “hops”.

A Port Number identifies a specific service running on a server. SSH, or Secure Shell, which permits remote shell access to a device, commonly runs on port 22. FTP or File Transfer Protocol (which might, for example, be used with an FTP Client to transfer static assets to a server) commonly runs on Port 21. We might say, then, that ports are specific rooms inside each house in our analogy above, for rooms in houses are made for different things — a bedroom for sleeping, a kitchen for food preparation, a dining room for consumption of said food, etc., just like ports correspond to programs that perform specific services. For us, Web Servers commonly run on Port 80, although you are free to specify whichever Port Number you wish as long as they are not in use by some other service (they can’t collide).

In order to access a website, you need the IP Address of the site. Despite that, we normally access websites via a URL. Behind the scenes, a DNS, or Domain Name Server, converts that URL into an IP Address, allowing the browser to make a GET Request to the server, get the HTML, and render it to the screen. 8.8.8.8 is the address of one of Google’s Public DNS Servers. You might imagine that requiring the resolution of a hostname to an IP Address via a remote DNS Server will take time, and you’d be right. To reduce latency, Operating Systems have a DNS Cache — a temporary database that stores DNS lookup information, thereby reducing the frequency of which said lookups must occur. The DNS Resolver Cache can be viewed on Windows with the ipconfig /displaydns CMD command and purged via the ipconfig /flushdns command.

On a Unix Server, more common lower number ports, like 80, require root level (escalated if you come from a Windows background) privileges. For that reason, we’ll be using port 3000 for our development work, but will allow the server to choose the port number (whatever is available) when we deploy to our production environment.
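A common Node convention for that (a sketch of the usual pattern, not code from this article’s repo) is to read the port from an environment variable and fall back to 3000 for local development:

// Use the environment's port in production, 3000 in development.
const PORT = process.env.PORT || 3000;

app.listen(PORT, () => {
    console.log(`Server is up on port ${PORT}.`);
});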

Finally, note that we can type IP Addresses directly in Google Chrome’s search bar, thus bypassing the DNS Resolution mechanism. Typing 216.58.194.36, for example, will take you to Google.com. In our development environment, when using our own computer as our dev server, we’ll be using localhost and port 3000. An address is formatted as hostname:port, so our server will be up on localhost:3000. Localhost, or 127.0.0.1, is the loopback address, and means the address of “this computer”. It is a hostname, and its IPv4 address resolves to 127.0.0.1. Try pinging localhost on your machine right now. You might get ::1 back — which is the IPv6 loopback address, or 127.0.0.1 back — which is the IPv4 loopback address. IPv4 and IPv6 are two different IP Address formats associated with different standards — some IPv6 addresses can be converted to IPv4 but not all.

Returning To Express

I mentioned HTTP Requests, Verbs, and Status Codes in my previous article, Get Started With Node: An Introduction To APIs, HTTP And ES6+ JavaScript. If you do not have a general understanding of the protocol, feel free to jump to the “HTTP and HTTP Requests” section of that piece.

In order to get a feel for Express, we are simply going to set up our endpoints for the four fundamental operations we’ll be performing on the database — Create, Read, Update, and Delete, known collectively as CRUD.

Remember, we access endpoints by routes in the URL. That is, although the words “route” and “endpoint” are commonly used interchangeably, an endpoint is technically a programming language function (like an ES6 Arrow Function) that performs some server-side operation, while a route is what the endpoint is located behind. We specify these endpoints as callback functions, which Express will fire when the appropriate request is made from the client to the route behind which the endpoint lives. You can remember the above by realizing that it is endpoints that perform a function and the route is the name that is used to access the endpoints. As we’ll see, the same route can be associated with multiple endpoints by using different HTTP Verbs (similar to method overloading if you come from a classical OOP background with Polymorphism).

Keep in mind, we are following REST (REpresentational State Transfer) Architecture by permitting clients to make requests to our server. This is, after all, a REST or RESTful API. Specific requests made to specific routes will fire specific endpoints which will do specific things. An example of such a “thing” that an endpoint might do is adding new data to a database, removing data, updating data, etc.

Express knows what endpoint to fire because we tell it, explicitly, the request method (GET, POST, etc.) and the route — we define what functions to fire for specific combinations of the above, and the client makes the request, specifying a route and method. To put this more simply, with Node, we’ll tell Express — “Hey, if someone makes a GET Request to this route, then go ahead and fire this function (use this endpoint)”. Things can get more complicated: “Express, if someone makes a GET Request to this route, but they don’t send up a valid Authorization Bearer Token in the header of their request, then please respond with an HTTP 401 Unauthorized. If they do possess a valid Bearer Token, then please send down whatever protected resource they were looking for by firing the endpoint. Thanks very much and have a nice day.” Indeed, it’d be nice if programming languages could be that high level without leaking ambiguity, but it nonetheless demonstrates the basic concepts.
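
That pseudo-conversation might translate into something like the sketch below. Note that isValidBearerToken is a hypothetical helper standing in for real token verification — it’s not provided by Express or Node:

app.get('/protected-route', (req, res) => {
    // The Authorization header, if present, looks like "Bearer <token>".
    const authHeader = req.headers.authorization || '';
    const token = authHeader.replace('Bearer ', '');

    // isValidBearerToken is hypothetical, for illustration only.
    if (!isValidBearerToken(token)) {
        return res.status(401).send({ error: 'Unauthorized' });
    }

    return res.send({ resource: 'The protected resource.' });
});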

Remember, the endpoint, in a way, lives behind the route. So it’s imperative that the client provides, in the header of the request, what method it wants to use so that Express can figure out what to do. The request will be made to a specific route, which the client will specify (along with the request type) when contacting the server, allowing Express to do what it needs to do and us to do what we need to do when Express fires our callbacks. That’s what it all comes down to.

In the code examples earlier, we called the listen function which was available on app, passing to it a port and callback. app itself, if you remember, is the return result from calling the express variable as a function (that is, express()), and the express variable is what we named the return result from requiring 'express' from our node_modules folder. Just like listen is called on app, we specify HTTP Request Endpoints by calling them on app. Let’s look at GET:

app.get('/my-test-route', () => {
   // ...
});

The first parameter is a string, and it is the route behind which the endpoint will live. The callback function is the endpoint. I’ll say that again: the callback function — the second parameter — is the endpoint that will fire when an HTTP GET Request is made to whatever route we specify as the first argument (/my-test-route in this case).

Now, before we do any more work with Express, we need to know how routes work. The route we specify as a string will be called by making the request to www.domain.com/the-route-we-chose-earlier-as-a-string. In our case, the domain is localhost:3000, which means, in order to fire the callback function above, we have to make a GET Request to localhost:3000/my-test-route. If we used a different string as the first argument above, the URL would have to be different to match what we specified in JavaScript.

When talking about such things, you’ll likely hear of Glob Patterns. We could say that all of our API’s routes are located at the localhost:3000/** Glob Pattern, where ** is a wildcard meaning any directory or sub-directory (note that routes are not directories) to which root is a parent — that is, everything.

Let’s go ahead and add a log statement into that callback function so that altogether we have:

// Getting the module from node_modules.
const express = require('express');

// Creating our Express Application.
const app = express();

// Defining the port we’ll bind to.
const PORT = 3000;

// Defining a new endpoint behind the "/my-test-route" route.
app.get('/my-test-route', () => {
   console.log('A GET Request was made to /my-test-route.');
});

// Binding the server to port 3000.
app.listen(PORT, () => {
   console.log(`Server is up on port ${PORT}.`)
});

We’ll get our server up and running by executing node server/server.js (with Node installed on our system and accessible globally from system environment variables) in the project’s root directory. Like earlier, you should see the message that the server is up in the console. Now that the server is running, open a browser, and visit localhost:3000 in the URL bar.

You should be greeted with an error message that states Cannot GET /. Press Ctrl + Shift + I on Windows in Chrome to view the developer console. In there, you should see that we have a 404 (Resource not found). That makes sense — we have only told the server what to do when someone visits localhost:3000/my-test-route. The browser has nothing to render at localhost:3000 (which is equivalent to localhost:3000/ with a slash).

If you look at the terminal window where the server is running, there should be no new data. Now, visit localhost:3000/my-test-route in your browser’s URL bar. You might see the same error in Chrome’s Console (because the browser is caching the content and still has no HTML to render), but if you view your terminal where the server process is running, you’ll see that the callback function did indeed fire and the log message was indeed logged.

Shut down the server with Ctrl + C.

Now, let’s give the browser something to render when a GET Request is made to that route so we can lose the Cannot GET / message. I’m going to take our app.get() from earlier, and in the callback function, I’m going to add two arguments. Remember, the callback function we are passing in is getting called by Express behind the scenes, and Express can add whatever arguments it wants. It actually adds two (well, technically three, but we’ll see that later), and while they are both extremely important, we don’t care about the first one for now. The second argument is called res, short for response, and I’ll access it by setting undefined as the first parameter:

app.get('/my-test-route', (undefined, res) => {
    console.log('A GET Request was made to /my-test-route.');
});

Again, we can call the res argument whatever we want, but res is convention when dealing with Express. res is actually an object, and upon it exist different methods for sending data back to the client. In this case, I’m going to access the send(...) function available on res to send back HTML which the browser will render. We are not limited to sending back HTML, however, and can choose to send back text, a JavaScript Object, a stream (streams are especially beautiful), or whatever.

app.get('/my-test-route', (undefined, res) => {
    console.log('A GET Request was made to /my-test-route.');
    res.send('<h1>Hello, World!</h1>');
});

If you shut down the server and then bring it back up, and then refresh your browser at the /my-test-route route, you’ll see the HTML get rendered.

The Network Tab of the Chrome Developer Tools will allow you to see this GET Request with more detail as it pertains to headers.

At this point, it’ll serve us well to start learning about Express Middleware — functions that can be fired globally after a client makes a request.

Express Middleware

Express provides methods by which to define custom middleware for your application. Indeed, the meaning of Express Middleware is best defined in the Express Docs:

Middleware functions are functions that have access to the request object (req), the response object (res), and the next middleware function in the application’s request-response cycle. The next middleware function is commonly denoted by a variable named next.

Middleware functions can perform the following tasks:

  • Execute any code.
  • Make changes to the request and the response objects.
  • End the request-response cycle.
  • Call the next middleware function in the stack.

In other words, a middleware function is a custom function that we (the developer) can define, and that will act as an intermediary between when Express receives the request and when our appropriate callback function fires. We might make a log function, for example, that will log every time a request is made. Note that we can also choose to make these middleware functions fire after our endpoint has fired, depending upon where you place it in the stack — something we’ll see later.

In order to specify custom middleware, we have to define it as a function and pass it into app.use(...).

const myMiddleware = (req, res, next) => {
    console.log(`Middleware has fired at time ${Date.now()}`);
    next();
};

app.use(myMiddleware); // This is the app variable returned from express().

All together, we now have:

// Getting the module from node_modules.  
const express =  require('express');  

// Creating our Express Application.  
const app =  express();  

// Our middleware function.
const myMiddleware = (req, res, next) => {
    console.log(`Middleware has fired at time ${Date.now()}`);
    next();
}

// Tell Express to use the middleware.
app.use(myMiddleware);

// Defining the port we’ll bind to.  
const PORT =  3000;  

// Defining a new endpoint behind the "/my-test-route" route. 
app.get('/my-test-route', () => { 
    console.log('A GET Request was made to /my-test-route.');  
});  

// Binding the server to port 3000. 
app.listen(PORT, () => { 
    console.log(`Server is up on port ${PORT}.`)  
});

If you make the requests through the browser again, you should now see that your middleware function is firing and logging timestamps. To foster experimentation, try removing the call to the next function and see what happens.

The middleware callback function gets called with three arguments, req, res, and next. req is the parameter we skipped over when building out the GET Handler earlier, and it is an object containing information regarding the request, such as headers, custom headers, parameters, and any body that might have been sent up from the client (as you would with a POST Request). I know we are talking about middleware here, but both the endpoints and the middleware function get called with req and res. req and res will be the same (unless one or the other mutates it) in both the middleware and the endpoint within the scope of a single request from the client. That means, for example, you could use a middleware function to sanitize data by stripping any characters that might be aimed at performing SQL or NoSQL Injections, and then handing the safe req to the endpoint.
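
Here is a rough sketch of that sanitization idea, assuming the body-parsing middleware we’ll meet shortly has already populated req.body, and assuming a hypothetical comment field (real sanitization is far more involved than this):

// Illustrative only - strips "$" and ".", characters commonly abused
// in MongoDB query-injection payloads, from a hypothetical body field.
app.use((req, res, next) => {
    if (req.body && typeof req.body.comment === 'string') {
        req.body.comment = req.body.comment.replace(/[$.]/g, '');
    }
    next();
});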

res, as seen earlier, permits you to send data back to the client in a handful of different ways.

next is a callback function that you have to execute when the middleware has finished doing its job in order to call the next middleware function in the stack or the endpoint. Be sure to take note that you will have to call this in the then block of any async functions you fire in the middleware. Depending on your async operation, you may or may not want to call it in the catch block. That is, the myMiddleware function fires after the request is made from the client but before the endpoint function of the request is fired. When we execute this code and make a request, you should see the Middleware has fired... message before the A GET Request was made to... message in the console. If you don’t call next(), the latter part will never run — your endpoint function for the request will not fire.
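
For example, with a promise-based operation inside the middleware (recordVisit here is a hypothetical promise-returning function, not a real API), the call to next belongs in the then block:

app.use((req, res, next) => {
    // recordVisit is hypothetical - any promise-returning async operation.
    recordVisit(req)
        .then(() => next()) // Only continue once the async work has finished.
        .catch(() => res.status(500).send({ error: 'Middleware failed.' }));
});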

Note also that I could have defined this function anonymously, as such (a convention to which I’ll be sticking):

app.use((req, res, next) => {
    console.log(`Middleware has fired at time ${Date.now()}`);
    next();
});

For anyone new to JavaScript and ES6, if the way in which the above works does not make immediate sense, the below example should help. We are simply defining a callback function (the anonymous function) which takes another callback function (next) as an argument. We call a function that takes a function argument a Higher Order Function. Look at it this way — the example depicts how the Express Source Code might work behind the scenes:

console.log('Suppose a request has just been made from the client.\n');

// This is what (it’s not exactly) the code behind app.use() might look like.
const use = callback => { 
    // Simple log statement to see where we are.
    console.log('Inside use() - the "use" function has been called.');

    // This depicts the termination of the middleware.
    const next = () => console.log('Terminating Middleware!\n');

    // Suppose req and res are defined above (Express provides them).
    const req = null, res = null;

    // "callback" is the "middleware" function that is passed into "use".
    // "next" is the above function that pretends to stop the middleware.
    callback(req, res, next);
};

// This is analogous to the middleware function we defined earlier.
// It gets passed in as "callback" in the "use" function above.
const myMiddleware = (req, res, next) => {
    console.log('Inside the myMiddleware function!');
    next();
}

// Here, we are actually calling "use()" to see everything work. 
use(myMiddleware);

console.log('Moving on to actually handle the HTTP Request or the next middleware function.');

We first call use, which takes myMiddleware as an argument. myMiddleware, in and of itself, is a function which takes three arguments – req, res, and next. Inside use, myMiddleware is called, and those three arguments are passed in. next is a function defined in use, and myMiddleware is received as callback inside the use method. If I’d placed use, in this example, on an object called app, we could have mimicked Express’s setup entirely, albeit without any sockets or network connectivity.

In this case, both use and myMiddleware are Higher Order Functions, because they both take functions as arguments.

If you execute this code, you will see the following response:

Suppose a request has just been made from the client. 

Inside use() - the "use" function has been called. 
Inside the myMiddleware function! 
Terminating Middleware! 

Moving on to actually handle the HTTP Request or the next middleware function.

Note that I could have also used anonymous functions to achieve the same result:

console.log('Suppose a request has just been made from the client.');

// This is what (it’s not exactly) the code behind app.use() might look like.
const use = callback => {
    // Simple log statement to see where we are.
    console.log('Inside use() - the "use" function has been called.');

    // Suppose req and res are defined above (Express provides them).
    const req = null, res = null;

    // "callback" is the function which is passed into "use".
    // The inline arrow function pretends to stop the middleware.
    callback(req, res, () => {
        console.log('Terminating Middleware!');
    });
};

// Here, we are actually calling "use()" to see everything work.
use((req, res, next) => {
    console.log('Inside the middleware function!');
    next();
});

console.log('Moving on to actually handle the HTTP Request.');

With that hopefully settled, we can now return to the actual task at hand — setting up our middleware.

The fact of the matter is, you will commonly have to send data up through an HTTP Request. You have a few different options for doing so — sending up URL Query Parameters, sending up data that will be accessible on the req object that we learned about earlier, etc. That object is not only available in the callback to calling app.use(), but also to any endpoint. We used undefined as a filler earlier so we could focus on res to send HTML back to the client, but now, we need access to it.

app.use('/my-test-route', (req, res) => {
    // The req object contains client-defined data that is sent up.
    // The res object allows the server to send data back down.
});

HTTP POST Requests might require that we send a body object up to the server. If you have a form on the client, and you take the user’s name and email, you will likely send that data to the server on the body of the request.

Let’s take a look at what that might look like on the client side:

<!DOCTYPE html> 
<html> 
    <body> 
        <form action="http://localhost:3000/email-list" method="POST" > 
            <input type="text" name="nameInput">
            <input type="email" name="emailInput"> 
            <input type="submit">
       </form> 
   </body> 
</html>

On the server side:

app.post('/email-list', (req, res) => {
    // What do we do now? 
    // How do we access the values for the user’s name and email?
});

To access the user’s name and email, we’ll have to use a particular type of middleware. This will put the data on an object called body available on req. Body Parser was a popular method of doing this, made available by the Express developers as a standalone NPM module. Now, Express comes pre-packaged with its own middleware to do this, which we’ll register like so:

app.use(express.urlencoded({ extended: true }));

Now we can do:

app.post('/email-list', (req, res) => {
    console.log('User Name: ', req.body.nameInput);
    console.log('User Email: ', req.body.emailInput);
});

All this does is take any user-defined input which is sent up from the client and make it available on the body object of req. Note that on req.body, we now have nameInput and emailInput, which are the names of the input tags in the HTML. Now, this client-defined data should be considered dangerous (never, never trust the client), and needs to be sanitized, but we’ll cover that later.

Another type of middleware provided by express is express.json(). express.json is used to package any JSON Payloads sent up in a request from the client onto req.body, while express.urlencoded will package any incoming requests with strings, arrays, or other URL Encoded data onto req.body. In short, both manipulate req.body, but .json() is for JSON Payloads and .urlencoded() is for, among others, POST Query Parameters.

Another way of saying this is that incoming requests with a Content-Type: application/json header (such as specifying a POST Body with the fetch API) will be handled by express.json(), while requests with header Content-Type: application/x-www-form-urlencoded (such as HTML Forms) will be handled with express.urlencoded(). This hopefully now makes sense.
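
To make the first case concrete, here is roughly what a client-side fetch call with a JSON payload might look like (reusing the field names from our form above); express.json() would place this body on req.body:

// Client-side sketch: a JSON body that express.json() will parse.
fetch('http://localhost:3000/email-list', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
        nameInput: 'Jamie Corkhill',
        emailInput: 'jamie@domain.com'
    })
});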

Starting Our CRUD Routes For MongoDB

Note: When performing PATCH Requests in this article, we won’t follow the JSONPatch RFC Spec — an issue we’ll rectify in the next article of this series.

Now that we understand how to specify each endpoint — by calling the relevant function on app, passing to it the route and a callback function containing the request and response objects — we can begin to define our CRUD Routes for the Bookshelf API. Indeed, and considering this is an introductory article, I won’t be taking care to follow HTTP and REST specifications completely, nor will I attempt to use the cleanest possible architecture. That will come in a future article.

I’ll open up the server.js file that we have been using thus far and empty everything out, so as to start from the below clean slate:

// Getting the module from node_modules.
const express = require('express'); 

// This creates our Express App.
const app = express(); 

// Define middleware.
app.use(express.json());
app.use(express.urlencoded({ extended: true }));

// Listening on port 3000 (arbitrary).
// Not a TCP or UDP well-known port. 
// Does not require superuser privileges.
const PORT = 3000;

// We will build our API here.
// ...

// Binding our application to port 3000.
app.listen(PORT, () => console.log(`Server is up on port ${PORT}.`));

Consider all following code to take up the // ... portion of the file above.

To define our endpoints, and because we are building a REST API, we should discuss the proper way to name routes. Again, you should take a look at the HTTP section of my former article for more information. We are dealing with books, so all routes will be located behind /books (the plural naming convention is standard).

Request     Route
POST        /books
GET         /books/:id
PATCH       /books/:id
DELETE      /books/:id

As you can see, an ID does not need to be specified when POSTing a book because we (or rather, MongoDB) will be generating it for us, automatically, server-side. GETting, PATCHing, and DELETing books will all require that we do pass that ID to our endpoint, which we’ll discuss later. For now, let’s simply create the endpoints:

// HTTP POST /books
app.post('/books', (req, res) => {
    // ...
    console.log('A POST Request was made!');
});

// HTTP GET /books/:id
app.get('/books/:id', (req, res) => {
    // ...
    console.log(`A GET Request was made! Getting book ${req.params.id}`);
});

// HTTP PATCH /books/:id
app.patch('/books/:id', (req, res) => {
    // ...
    console.log(`A PATCH Request was made! Updating book ${req.params.id}`);
});

// HTTP DELETE /books/:id
app.delete('/books/:id', (req, res) => {
    // ...
    console.log(`A DELETE Request was made! Deleting book ${req.params.id}`);
});

The :id syntax tells Express that id is a dynamic parameter that will be passed up in the URL. We have access to it on the params object which is available on req. I know “we have access to it on req” sounds like magic and magic (which doesn’t exist) is dangerous in programming, but you have to remember that Express is not a black box. It’s an open-source project available on GitHub under an MIT License. You can easily view its source code if you want to see how dynamic route parameters are put onto the req object.
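
As a quick aside (we won’t be adding this to our server.js), routes can carry more than one dynamic parameter, and each named segment lands on req.params under its own name. The /books/:id/notes/:noteId route below is hypothetical:

// Hypothetical route with two dynamic parameters.
app.get('/books/:id/notes/:noteId', (req, res) => {
    console.log(`Book ID: ${req.params.id}, Note ID: ${req.params.noteId}`);
});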

All together, we now have the following in our server.js file:

// Getting the module from node_modules.
const express = require('express'); 

// This creates our Express App.
const app = express(); 

// Define middleware.
app.use(express.json());
app.use(express.urlencoded({ extended: true }));

// Listening on port 3000 (arbitrary).
// Not a TCP or UDP well-known port. 
// Does not require superuser privileges.
const PORT = 3000;

// We will build our API here.
// HTTP POST /books
app.post('/books', (req, res) => {
    // ...
    console.log('A POST Request was made!');
});

// HTTP GET /books/:id
app.get('/books/:id', (req, res) => {
    // ...
    console.log(`A GET Request was made! Getting book ${req.params.id}`);
});

// HTTP PATCH /books/:id
app.patch('/books/:id', (req, res) => {
    // ...
    console.log(`A PATCH Request was made! Updating book ${req.params.id}`);
});

// HTTP DELETE /books/:id
app.delete('/books/:id', (req, res) => {
    // ...
    console.log(`A DELETE Request was made! Deleting book ${req.params.id}`);
});

// Binding our application to port 3000.
app.listen(PORT, () => console.log(`Server is up on port ${PORT}.`));

Go ahead and start the server, running node server.js from the terminal or command line, and visit your browser. Open the Chrome Development Console, and in the URL (Uniform Resource Locator) Bar, visit localhost:3000/books/123 (any ID will do for now, since we haven’t defined a plain GET /books route). You should already see the indicator in your OS’s terminal that the server is up, as well as the log statement for GET.

Thus far, we’ve been using a web browser to perform GET Requests. That is good for just starting out, but we’ll quickly find that better tools exist to test API routes. Indeed, we could paste fetch calls directly into the console or use some online service. In our case, and to save time, we’ll use cURL and Postman. I use both in this article (although you could use either one) so that I can introduce them in case you haven’t used them. cURL is a library (a very, very important library) and command-line tool designed to transfer data using various protocols. Postman is a GUI-based tool for testing APIs. After following the relevant installation instructions for both tools on your operating system, ensure your server is still running, and then execute the following commands (one-by-one) in a new terminal. It’s important that you type them and execute them individually, and then watch the log message in the separate terminal from your server. Also, note that the standard programming language comment symbol // is not a valid symbol in Bash or MS-DOS. You’ll have to omit those lines, and I only use them here to describe each block of cURL commands.

// HTTP POST Request (Localhost, IPv4, IPv6)
curl -X POST http://localhost:3000/books
curl -X POST http://127.0.0.1:3000/books
curl -X POST http://[::1]:3000/books

// HTTP GET Request (Localhost, IPv4, IPv6)
curl -X GET http://localhost:3000/books/123abc
curl -X GET http://127.0.0.1:3000/books/book-id-123
curl -X GET http://[::1]:3000/books/book-abc123

// HTTP PATCH Request (Localhost, IPv4, IPv6)
curl -X PATCH http://localhost:3000/books/456
curl -X PATCH http://127.0.0.1:3000/books/218
curl -X PATCH http://[::1]:3000/books/some-id

// HTTP DELETE Request (Localhost, IPv4, IPv6)
curl -X DELETE http://localhost:3000/books/abc
curl -X DELETE http://127.0.0.1:3000/books/314
curl -X DELETE http://[::1]:3000/books/217

As you can see, the ID that is passed in as a URL Parameter can be any value. The -X flag specifies the type of HTTP Request (it can be omitted for GET), and we provide the URL to which the request will be made thereafter. I’ve duplicated each request three times, allowing you to see that everything still works whether you use the localhost hostname, the IPv4 Address (127.0.0.1) to which localhost resolves, or the IPv6 Address (::1) to which localhost resolves. Note that cURL requires wrapping IPv6 Addresses in square brackets.
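
For what it’s worth, cURL can also send up a request body. Once our POST endpoint starts accepting JSON (which we’ll build shortly), a request might look something like this, where -H sets a header and -d supplies the body (the payload here is just a placeholder):

// HTTP POST Request with a JSON body
curl -X POST -H "Content-Type: application/json" -d '{"book":{"title":"A Book"}}' http://localhost:3000/books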

We are in a decent place now — we have the simple structure of our routes and endpoints set up. The server runs correctly and accepts HTTP Requests as we expect it to. Contrary to what you might expect, there is not long to go at this point — we just have to set up our database, host it (using a Database-as-a-Service — MongoDB Atlas), and persist data to it (and perform validation and create error responses).

Setting Up A Production MongoDB Database

To set up a production database, we’ll head over to the MongoDB Atlas Home Page and sign up for a free account. Thereafter, create a new cluster. You can maintain the default settings, picking a free-tier-applicable region. Then hit the “Create Cluster” button. The cluster will take some time to create, and then you’ll be able to attain your database URL and password. Take note of these when you see them. We’ll hardcode them for now, and then store them in environment variables later for security purposes. For help in creating and connecting to a cluster, I’ll refer you to the MongoDB Documentation, particularly this page and this page, or you can leave a comment below and I’ll try to help.

Creating A Mongoose Model

It’s recommended that you have an understanding of the meanings of Documents and Collections in the context of NoSQL (Not Only SQL — Structured Query Language). For reference, you might want to read both the Mongoose Quick Start Guide and the MongoDB section of my former article.

We now have a database that is ready to accept CRUD Operations. Mongoose is a Node module (or ODM — Object Document Mapper) that will allow us to perform those operations (abstracting away some of the complexities) as well as set up the schema, or structure, of the database collection.

As an important disclaimer, there is a lot of controversy around ORMs and such patterns as Active Record or Data Mapper. Some developers swear by ORMs and others swear against them (believing they get in the way). It’s also important to note that ORMs abstract a lot away, like connection pooling, socket connections and handling, etc. You could easily use the MongoDB Native Driver (another NPM Module), but it would take a lot more work. While it’s recommended that you play with the Native Driver before using ORMs, I omit the Native Driver here for brevity. For complex SQL operations on a Relational Database, not all ORMs will be optimized for query speed, and you may end up writing your own raw SQL. ORMs can come into play a lot with Domain-Driven Design and CQRS, among others. They are an established concept in the .NET world, and the Node.js community has not completely caught up yet — TypeORM is better, but it’s not NHibernate or Entity Framework.

To create our Model, I’ll create a new folder in the server directory entitled models, within which I’ll create a single file with the name book.js. Thus far, our project’s directory structure is as follows:

- server
  - node_modules
  - models
    - book.js
  - package.json
  - server.js

Indeed, this directory structure is not required, but I use it here because it’s simple. Allow me to note that this is not at all the kind of architecture you want to use for larger applications (and you might not even want to use JavaScript — TypeScript could be a better option), which I discuss in this article’s closing. The next step will be to install mongoose, which is performed via, as you might expect, npm i mongoose.

The meaning of a Model is best ascertained from the Mongoose documentation:

Models are fancy constructors compiled from Schema definitions. An instance of a model is called a document. Models are responsible for creating and reading documents from the underlying MongoDB database.

Before creating the Model, we’ll define its Schema. A Schema will, among other things, set expectations about the values of the properties provided. MongoDB is schemaless, and thus this functionality is provided by the Mongoose ODM. Let’s start with a simple example. Suppose I want my database to store a user’s name, email address, and password. Traditionally, as a plain old JavaScript Object (POJO), such a structure might look like this:

const userDocument = {
    name: 'Jamie Corkhill',
    email: 'jamie@domain.com',
    password: 'Bcrypt Hash'
};

If the above object was how we expected our user’s object to look, then we would need to define a schema for it, like this:

const schema = {
    name: {
        type: String,
        trim: true,
        required: true
    },
    email: {
        type: String,
        trim: true,
        required: true
    },
    password: {
        type: String,
        required: true
    }
};

Notice that when creating our schema, we define what properties will be available on each document in the collection as an object in the schema. In our case, that’s name, email, and password. The fields type, trim, and required tell Mongoose what data to expect. If we try to set the name field to a number, for example, or if we don’t provide a field, Mongoose will throw an error (because we are expecting a type of String), and we can send back a 400 Bad Request to the client. This might not make sense right now because we have defined an arbitrary schema object. However, the fields of type, trim, and required (among others) are special validators that Mongoose understands. trim, for example, will remove any whitespace from the beginning and end of the string. We’ll pass the above schema to mongoose.Schema() in the future and that function will know what to do with the validators.

Understanding how Schemas work, we’ll create the model for our Books Collection of the Bookshelf API. Let’s define what data we require:

  1. Title

  2. ISBN Number

  3. Author

    1. First Name

    2. Last Name

  4. Publishing Date

  5. Finished Reading (Boolean)

I’m going to create this in the book.js file we created earlier in /models. Like the example above, we’ll be performing validation:

const mongoose = require('mongoose');

// Define the schema:
const mySchema = {
    title: {
        type: String,
        required: true,
        trim: true,
    },
    isbn: {
        type: String,
        required: true,
        trim: true,
    },
    author: {
        firstName:{
            type: String,
            required: true,
            trim: true
        },
        lastName: {
            type: String,
            required: true,
            trim: true
        }
    },
    publishingDate: {
        type: String
    },
    finishedReading: {
        type: Boolean,
        required: true,
        default: false
    }
}

default will set a default value for the property if none is provided — finishedReading for example, although a required field, will be set automatically to false if the client does not send one up.

Mongoose also provides the ability to perform custom validation on our fields, which is done by supplying the validate() method, which receives the value that the client attempted to set as its one and only parameter. In this function, we can throw an error if the validation fails. Here is an example:

// ...
isbn: {
    type: String,
    required: true,
    trim: true,
    validate(value) {
        if (!validator.isISBN(value)) {
            throw new Error('ISBN is invalid.');
        }
    }
}
// ...

Now, if anyone supplies an invalid ISBN to our model, Mongoose will throw an error when trying to save that document to the collection. I’ve already installed the NPM module validator via npm i validator and required it. validator contains a bunch of helper functions for common validation requirements, and I use it here instead of RegEx because ISBNs can’t be validated with RegEx alone due to a trailing checksum. Remember, users will be sending a JSON body to one of our POST routes. That endpoint will catch any errors (such as an invalid ISBN) when attempting to save, and if one is thrown, it’ll return a blank response with an HTTP 400 Bad Request status — we haven’t yet added that functionality.

Finally, we have to define our schema of earlier as the schema for our model, so I’ll make a call to mongoose.Schema() passing in that schema:

const bookSchema = mongoose.Schema(mySchema);

To make things more precise and clean, I’ll replace the mySchema variable with the actual object all on one line:

const bookSchema = mongoose.Schema({
    title:{
        type: String,
        required: true,
        trim: true,
    },
    isbn:{
        type: String,
        required: true,
        trim: true,
        validate(value) {
           if (!validator.isISBN(value)) {
                throw new Error('ISBN is invalid.');
           }
        }
    },
    author:{
        firstName: {
            type: String,
            required: true,
            trim: true
        },
        lastName:{
            type: String,
            required: true,
            trim: true
        }
    },
    publishingDate:{
        type: String
    },
    finishedReading:{
        type: Boolean,
        required: true,
        default: false
    }
});

Let’s take a final moment to discuss this schema. We are saying that each of our documents will consist of a title, an ISBN, an author with a first and last name, a publishing date, and a finishedReading boolean.

  1. title will be of type String, it’s a required field, and we’ll trim any whitespace.
  2. isbn will be of type String, it’s a required field, it must match the validator, and we’ll trim any whitespace.
  3. author is of type object containing a required, trimmed, string firstName and a required, trimmed, string lastName.
  4. publishingDate is of type String (although we could make it of type Date or Number for a Unix timestamp).
  5. finishedReading is a required boolean that will default to false if not provided.

With our bookSchema defined, Mongoose knows what data and what fields to expect within each document to the collection that stores books. However, how do we tell it what collection that specific schema defines? We could have hundreds of collections, so how do we correlate, or tie, bookSchema to the Book collection?

The answer, as seen earlier, is with the use of models. We’ll use bookSchema to create a model, and that model will model the data to be stored in the Book collection, which will be created by Mongoose automatically.

Append the following lines to the end of the file:

const Book = mongoose.model('Book', bookSchema);

module.exports = Book;

As you can see, we have created a model, the name of which is Book (the first parameter to mongoose.model()), and also provided the ruleset, or schema, to which all data saved in the Book collection will have to abide. We export this model as a default export, allowing us to require the file for our endpoints to access. Book is the object upon which we’ll call all of the required functions to Create, Read, Update, and Delete data, which are provided by Mongoose.

Altogether, our book.js file should look as follows:

const mongoose = require('mongoose');
const validator = require('validator');

// Define the schema.
const bookSchema = mongoose.Schema({
    title:{
        type: String,
        required: true,
        trim: true,
    },
    isbn:{
        type: String,
        required: true,
        trim: true,
        validate(value) {
            if (!validator.isISBN(value)) {
                throw new Error('ISBN is invalid.');
            }
        }
    },
    author:{
        firstName: {
            type: String,
            required: true,
            trim: true
        },
        lastName:{
            type: String,
            required: true,
            trim: true
        }
    },
    publishingDate:{
        type: String
    },
    finishedReading:{
        type: Boolean,
        required: true,
        default: false
    }
});

// Create the "Book" model of name Book with schema bookSchema.
const Book = mongoose.model('Book', bookSchema);

// Provide the model as a default export.
module.exports = Book;

Connecting To MongoDB (Basics)

Don’t worry about copying down this code. I’ll provide a better version in the next section. To connect to our database, we’ll have to provide the database URL and password. We’ll call the connect method available on mongoose to do so, passing to it the required data. For now, we are going to hardcode the URL and password — an extremely frowned upon technique for many reasons: namely the accidental committing of sensitive data to a public (or private made public) GitHub Repository. Realize also that commit history is saved, and that if you accidentally commit a piece of sensitive data, removing it in a future commit will not prevent people from seeing it (or bots from harvesting it), because it’s still available in the commit history. CLI tools exist to mitigate this issue and rewrite history.

As stated, for now, we’ll hard code the URL and password, and then save them to environment variables later. At this point, let’s look at simply how to do this, and then I’ll mention a way to optimize it.

const mongoose = require('mongoose');

const MONGODB_URL = 'Your MongoDB URL';

mongoose.connect(MONGODB_URL, {
    useNewUrlParser: true,
    useCreateIndex: true,
    useFindAndModify: false,
    useUnifiedTopology: true
});

This will connect to the database. We provide the URL that we attained from the MongoDB Atlas dashboard, and the object passed in as the second parameter specifies features to use as to, among others, prevent deprecation warnings.

Mongoose, which uses the core MongoDB Native Driver behind the scenes, has to attempt to keep up with breaking changes made to the driver. In a new version of the driver, the mechanism used to parse connection URLs was changed, so we pass the useNewUrlParser: true flag to specify that we want to use the latest version available from the official driver.

By default, if you set indexes (and they are called “indexes” not “indices”) (which we won’t cover in this article) on data in your database, Mongoose uses the ensureIndex() function available from the Native Driver. MongoDB deprecated that function in favor of createIndex(), and so setting the flag useCreateIndex to true will tell Mongoose to use the createIndex() method from the driver, which is the non-deprecated function.

Mongoose’s original version of findOneAndUpdate (which is a method to find a document in a database and update it) pre-dates the Native Driver version. That is, findOneAndUpdate() was not originally a Native Driver function but rather one provided by Mongoose, so Mongoose had to use findAndModify provided behind the scenes by the driver to create findOneAndUpdate functionality. With the driver now updated, it contains its own such function, so we don’t have to use findAndModify. This might not make sense, and that’s okay — it’s not an important piece of information on the scale of things.

Finally, MongoDB deprecated its old server and engine monitoring system. We use the new method with useUnifiedTopology: true.

What we have thus far is a way to connect to the database. But here’s the thing — it’s not scalable or efficient. When we write unit tests for this API, the unit tests are going to use their own test data (or fixtures) on their own test databases. So, we want a way to be able to create connections for different purposes — some for testing environments (that we can spin up and tear down at will), others for development environments, and others for production environments. To do that, we’ll build a factory. (Remember that from earlier?)

Connecting To Mongo — Building An Implementation Of A JS Factory

Indeed, Java Objects are not analogous at all to JavaScript Objects, and so, subsequently, what we know above from the Factory Design Pattern won’t apply. I merely provided that as an example to show the traditional pattern. To attain an object in Java, or C#, or C++, etc., we have to instantiate a class. This is done with the new keyword, which instructs the compiler to allocate memory for the object on the heap. In C++, this gives us a pointer to the object that we have to clean up ourselves so we don’t have hanging pointers or memory leaks (C++ has no garbage collector, unlike Node/V8, which is built on C++). In JavaScript, the above need not be done — we don’t need to instantiate a class to attain an object — an object is just {}. Some people will say that everything in JavaScript is an object, although that is technically not true because primitive types are not objects.

For the above reasons, our JS Factory will be simpler, sticking to the loose definition of a factory being a function that returns an object (a JS object). Since a function is an object (for functions inherit from objects via prototypal inheritance), our below example will meet this criterion. To implement the factory, I’ll create a new folder inside of server called db. Within db I’ll create a new file called mongoose.js. This file will make connections to the database. Inside of mongoose.js, I’ll create a function called connectionFactory and export it by default:

// Directory - server/db/mongoose.js

const mongoose = require('mongoose');

const MONGODB_URL = 'Your MongoDB URL';

const connectionFactory = () => {
    return mongoose.connect(MONGODB_URL, {
        useNewUrlParser: true,
        useCreateIndex: true,
        useFindAndModify: false
    });
};

module.exports = connectionFactory;

Using the ES6 Arrow Function shorthand for returning a single expression on the same line as the signature, I’ll make this file simpler by getting rid of the connectionFactory definition and just exporting the factory by default:

// server/db/mongoose.js
const mongoose = require('mongoose');

const MONGODB_URL = 'Your MongoDB URL';

module.exports = () => mongoose.connect(MONGODB_URL, {
    useNewUrlParser: true,
    useCreateIndex: true,
    useFindAndModify: false
});

Now, all one has to do is require the file and call the method that gets exported, like this:

const connectionFactory = require('./db/mongoose');
connectionFactory();

// OR

require('./db/mongoose')();

You could invert control by having the MongoDB URL be provided as a parameter to the factory function, but we are instead going to vary the URL via an environment variable based on the environment.
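
If you did want to invert control in that way, the factory would simply accept the URL from its caller. A sketch (not the approach we’ll take):

// server/db/mongoose.js - sketch with the URL injected by the caller.
const mongoose = require('mongoose');

module.exports = (url) => mongoose.connect(url, {
    useNewUrlParser: true,
    useCreateIndex: true,
    useFindAndModify: false
});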

The benefit of wrapping our connection in a function is that we can call that function later in code to connect to the database from files aimed at production and those aimed at local and remote integration testing, both on-device and with a remote CI/CD pipeline/build server.

Building Our Endpoints

We now begin to add very simple CRUD-related logic to our endpoints. As previously stated, a short disclaimer is in order. The methods by which we go about implementing our business logic here are not ones that you should mirror for anything other than simple projects. Connecting to databases and performing logic directly within endpoints is (and should be) frowned upon, for you lose the ability to swap out services or DBMSs without having to perform an application-wide refactor. Nonetheless, considering this is a beginner’s article, I employ these bad practices here. A future article in this series will discuss how we can increase both the complexity and the quality of our architecture.

For now, let’s go back to our server.js file and ensure we both have the same starting point. Notice I added the require statement for our database connection factory and I imported the model we exported from ./models/book.js.

const express = require('express'); 

// Database connection and model.
require('./db/mongoose.js');
const Book = require('./models/book.js');

// This creates our Express App.
const app = express(); 

// Define middleware.
app.use(express.json());
app.use(express.urlencoded({ extended: true }));

// Listening on port 3000 (arbitrary).
// Not a TCP or UDP well-known port. 
// Does not require superuser privileges.
const PORT = 3000;

// We will build our API here.
// HTTP POST /books
app.post('/books', (req, res) => {
    // ...
    console.log('A POST Request was made!');
});

// HTTP GET /books/:id
app.get('/books/:id', (req, res) => {
    // ...
    console.log(`A GET Request was made! Getting book ${req.params.id}`);
});

// HTTP PATCH /books/:id
app.patch('/books/:id', (req, res) => {
    // ...
    console.log(`A PATCH Request was made! Updating book ${req.params.id}`);
});

// HTTP DELETE /books/:id
app.delete('/books/:id', (req, res) => {
    // ...
    console.log(`A DELETE Request was made! Deleting book ${req.params.id}`);
});

// Binding our application to port 3000.
app.listen(PORT, () => console.log(`Server is up on port ${PORT}.`));

I’m going to start with app.post(). We have access to the Book model because we exported it from the file within which we created it. As stated in the Mongoose docs, Book is constructable. To create a new book, we call the constructor and pass the book data in, as follows:

const book = new Book(bookData);

In our case, we’ll have bookData as the object sent up in the request, which will be available on req.body.book. Remember, express.json() middleware will put any JSON data that we send up onto req.body. We are to send up JSON in the following format:

{
    "book": {
        "title": "The Art of Computer Programming",
        "isbn": "ISBN-13: 978-0-201-89683-1",
        "author": { 
            "firstName": "Donald", 
            "lastName": "Knuth" 
        }, 
        "publishingDate": "July 17, 1997",
        "finishedReading": true
    }
}

What that means, then, is that the JSON we pass up will get parsed, and the entire JSON object (the first pair of braces) will be placed on req.body by the express.json() middleware. The one and only property on our JSON object is book, and thus the book object will be available on req.body.book.

At this point, we can call the model constructor function and pass in our data:

app.post('/books', async (req, res) => {    // <- Notice 'async'
    const book = new Book(req.body.book);
    await book.save();                      // <- Notice 'await'
});

Notice a few things here. Calling the save method on the instance we get back from calling the constructor function will persist the req.body.book object to the database if and only if it complies with the schema we defined in the Mongoose model. The act of saving data to a database is an asynchronous operation, and this save() method returns a promise — the settling of which we must await. Rather than chain on a .then() call, I use the Async/Await syntax, which means I must make the callback function to app.post async.
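
For comparison, the same endpoint written with a chained .then() call instead of Async/Await would look roughly like this:

// Promise-chaining equivalent of the async/await version above.
app.post('/books', (req, res) => {
    const book = new Book(req.body.book);
    book.save().then(() => {
        // The book has been persisted at this point.
    });
});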

book.save() will reject with a ValidationError if the object the client sent up does not comply with the schema we defined. Our current setup makes for some very flaky and badly written code, for we don’t want our application to crash in the event of a failure regarding validation. To fix that, I’ll surround the dangerous operation in a try/catch clause. In the event of an error, I’ll return an HTTP 400 Bad Request or an HTTP 422 Unprocessable Entity. There is some amount of debate over which to use, so I’ll stick with a 400 for this article since it is more generic.

app.post('/books', async (req, res) => { 
    try {
        const book = new Book(req.body.book);
        await book.save();    
        return res.status(201).send({ book });
    } catch (e) {
        return res.status(400).send({ error: 'ValidationError' });
    }
});

Notice that I use the ES6 Object Shorthand to just return the book object right back to the client in the success case with res.send({ book }) — that would be equivalent to res.send({ book: book }). I also return the expression just to make sure my function exits. In the catch block, I set the status to be 400 explicitly, and return the string ‘ValidationError’ on the error property of the object that gets sent back. A 201 is the success path status code meaning “CREATED”.

Indeed, this isn’t the best solution either because we can’t really be sure the reason for failure was a Bad Request on the client’s side. Maybe we lost connection (suppose a dropped socket connection, and thus a transient exception) to the database, in which case we should probably return a 500 Internal Server error. A way to check this would be to read the e error object and selectively return a response. Let’s do that now, but as I’ve said multiple times, a followup article will discuss proper architecture in terms of Routers, Controllers, Services, Repositories, custom error classes, custom error middleware, custom error responses, Database Model/Domain Entity data mapping, and Command Query Separation (CQS).

// Note: this requires mongoose in scope (const mongoose = require('mongoose');).
app.post('/books', async (req, res) => {
    try {
        const book = new Book(req.body.book);
        await book.save();
        return res.status(201).send({ book });
    } catch (e) {
        if (e instanceof mongoose.Error.ValidationError) {
            return res.status(400).send({ error: 'ValidationError' });
        } else {
            return res.status(500).send({ error: 'Internal Error' });
        }
    }
});

Go ahead and open Postman (assuming you have it, otherwise, download and install it) and create a new request. We’ll be making a POST Request to localhost:3000/books. Under the “Body” tab within the Postman Request section, I’ll select the “raw” radio button and select “JSON” in the dropdown button to the far right. This will go ahead and automatically add the Content-Type: application/json header to the request. I’ll then copy and paste the Book JSON Object from earlier into the Body text area. This is what we have:

Data to populate Postman fields with for our POST Request.

Thereafter, I’ll hit the send button, and you should see a 201 Created response in the “Response” section of Postman (the bottom row). We see this because we specifically asked Express to respond with a 201 and the Book object — had we just done res.send() with no status code, express would have automatically responded with a 200 OK. As you can see, the Book object is now saved to the database and has been returned to the client as the Response to the POST Request.

JSON Payload Response to our POST Request.

If you view the database Book collection through MongoDB Atlas, you’ll see that the book was indeed saved.

You can also tell that MongoDB has inserted the __v and _id fields. The former represents the version of the document, in this case, 0, and the latter is the document’s ObjectID — which is automatically generated by MongoDB and is guaranteed to have a low collision probability.

A Summary Of What We Have Covered Thus Far

We have covered a lot thus far in the article. Let’s take a short reprieve by going over a brief summary before returning to finish the Express API.

We learned about ES6 Object Destructuring, the ES6 Object Shorthand Syntax, as well as the ES6 Rest/Spread operator. All three of those let us do the following (and more, as discussed above):

// Destructuring Object Properties:
const { a: newNameA = 'Default', b } = { a: 'someData', b: 'info' };
console.log(`newNameA: ${newNameA}, b: ${b}`); // newNameA: someData, b: info

// Destructuring Array Elements
const [elemOne, elemTwo] = [() => 'hi', 'data'];
console.log(`elemOne(): ${elemOne()}, elemTwo: ${elemTwo}`); // elemOne(): hi, elemTwo: data

// Object Shorthand
const makeObj = (name) => ({ name });
console.log(`makeObj('Tim'): ${JSON.stringify(makeObj('Tim'))}`); // makeObj('Tim'): {"name":"Tim"}

// Rest, Spread
const [c, d, ...rest] = [0, 1, 2, 3, 4];
console.log(`c: ${c}, d: ${d}, rest: ${rest}`); // c: 0, d: 1, rest: 2,3,4

We also covered Express, Express Middleware, Servers, Ports, IP Addressing, etc. Things got interesting when we learned that there exist methods available on the return result from require('express')() with the names of the HTTP Verbs, such as app.get and app.post.

If that require('express')() part didn’t make sense to you, this was the point I was making:

const express = require('express');
const app = express();
app.someHTTPVerb

It should make sense in the same way that we fired off the connection factory before for Mongoose.

Each route handler, which is the endpoint function (or callback function), gets passed in a req object and a res object from Express behind the scenes. req contains data specific to the incoming request from the client, such as headers or any JSON sent up. res is what permits us to return responses to the client. (Handlers technically also receive a next function, as we’ll see in a minute.)

With Mongoose, we saw how we can connect to the database with two methods — a primitive way and a more advanced/practical way that borrows from the Factory Pattern. We’ll end up using this when we discuss Unit and Integration Testing with Jest (and mutation testing) because it’ll permit us to spin up a test instance of the DB populated with seed data against which we can run assertions.

After that, we created a Mongoose schema object and used it to create a model, and then learned how we can call the constructor of that model to create a new instance of it. Available on the instance is a save method (among others), which is asynchronous in nature, and which will check that the object structure we passed in complies with the schema, resolving the promise if it does, and rejecting the promise with a ValidationError if it does not. In the event of a resolution, the new document is saved to the database and we respond with an HTTP 200 OK/201 CREATED, otherwise, we catch the thrown error in our endpoint, and return an HTTP 400 Bad Request to the client.

As we continue building out our endpoints, you’ll learn more about some of the methods available on the model and the model instance.

Finishing Our Endpoints

Having completed the POST Endpoint, let’s handle GET. As I mentioned earlier, the :id syntax inside the route lets Express know that id is a route parameter, accessible from req.params. You already saw that when you match some ID for the param “wildcard” in the route, it was printed to the screen in the early examples. For instance, if you made a GET Request to “/books/test-id-123”, then req.params.id would be the string test-id-123 because the param name was id by having the route as HTTP GET /books/:id.

So, all we need to do is retrieve that ID from the req object and check to see if any document in our database has the same ID — something made very easy by Mongoose (and the Native Driver).

app.get('/books/:id', async (req, res) => {
    const book = await Book.findById(req.params.id);
    console.log(book);
    res.send({ book });
});

You can see that accessible upon our model is a function we can call that will find a document by its ID. Behind the scenes, Mongoose will cast whatever ID we pass into findById to the type of the _id field on the document, or in this case, an ObjectId. If a matching ID is found (and only one will ever be found for ObjectId has an extremely low collision probability), that document will be placed in our book constant variable. If not, book will be null — a fact we’ll use in the near future.

For now, let’s restart the server (you must restart the server unless you’re using nodemon) and ensure that we still have the one book document from before inside the Books Collection. Go ahead and copy the ID of that document, the highlighted portion of the image below:

An example of an ObjectID (the Book document’s _id) to use for the upcoming GET Request.

And use it to make a GET Request to /books/:id with Postman as follows (note that the body data is just left over from my earlier POST Request. It’s not actually being used despite the fact that it’s depicted in the image below):

API URL and Postman data for the GET Request.

Upon doing so, you should get the book document with the specified ID back inside the Postman response section. Notice that earlier, with the POST Route, which is designed to “POST” or “push” new resources to the server, we responded with a 201 Created — because a new resource (or document) was created. In the case of GET, nothing new was created — we just requested a resource with a specific ID, thus a 200 OK status code is what we got back, instead of 201 Created.

As is common in the field of software development, edge cases must be accounted for — user input is inherently unsafe and erroneous, and it’s our job, as developers, to be flexible to the types of input we can be given and to respond to them accordingly. What do we do if the user (or the API Caller) passes us some ID that can’t be cast to a MongoDB ObjectID, or an ID that can be cast but that doesn’t exist?

For the former case, Mongoose is going to throw a CastError — which is understandable because if we provide an ID like math-is-fun, then that’s obviously not something that can be cast to an ObjectID, and casting to an ObjectID is specifically what Mongoose is doing under the hood.

For the latter case, we could easily rectify the issue via a Null Check or a Guard Clause. Either way, I’m going to send back an HTTP 404 Not Found Response. I’ll show you a few ways we can do this: a bad way, and then a better way.

Firstly, we could do the following:

app.get('/books/:id', async (req, res) => {
    try {
        const book = await Book.findById(req.params.id);
        
        if (!book) throw new Error();
    
        return res.send({ book });
    } catch (e) {
        return res.status(404).send({ error: 'Not Found' });
    }
});

This works and we can use it just fine. I expect that the statement await Book.findById() will throw a Mongoose CastError if the ID string can’t be cast to an ObjectID, causing the catch block to execute. If it can be cast but the corresponding ObjectID does not exist, then book will be null and the Null Check will throw an error, again firing the catch block. Inside catch, we just return a 404. There are two problems here. First, even if the Book is found but some other unknown error occurs, we send back a 404 when we should probably give the client a generic catch-all 500. Second, we are not really differentiating between whether the ID sent up is valid but non-existent, or whether it’s just a bad ID.

So, here is another way:

const mongoose = require('mongoose');

app.get('/books/:id', async (req, res) => {
    try {
        const book = await Book.findById(req.params.id);
        
        if (!book) return res.status(404).send({ error: 'Not Found' });
        
        return res.send({ book });
    } catch (e) {
        if (e instanceof mongoose.Error.CastError) {
            return res.status(400).send({ error: 'Not a valid ID' });
        } else {
            return res.status(500).send({ error: 'Internal Error' });
        }
    }
});

The nice thing about this is that we can handle all three cases of a 400, a 404 and a generic 500. Notice that after the Null Check on book, I use the return keyword on my response. This is very important because we want to make sure we exit the route handler there.

Another option would be to check explicitly whether the id on req.params can be cast to an ObjectID, as opposed to permitting Mongoose to cast it implicitly, with mongoose.Types.ObjectId.isValid(id). There is an edge case with 12-byte strings, however, that causes this check to sometimes succeed unexpectedly.
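Here is a sketch of that explicit check used as a guard clause at the top of the GET handler:

const { Types } = require('mongoose');

app.get('/books/:id', async (req, res) => {
    // Reject IDs that can't be cast before ever querying the database.
    // Caveat: any 12-byte string (e.g. 'aaaaaaaaaaaa') is also considered
    // castable, which is the edge case noted above.
    if (!Types.ObjectId.isValid(req.params.id)) {
        return res.status(400).send({ error: 'Not a valid ID' });
    }

    const book = await Book.findById(req.params.id);

    if (!book) return res.status(404).send({ error: 'Not Found' });

    return res.send({ book });
});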

We could make this repetitive error handling less painful with Boom, an HTTP Response library, for example, or we could employ Error Handling Middleware. We could also transform Mongoose Errors into something more readable with Mongoose Hooks/Middleware as described here. An additional option would be to define custom error objects and use global Express Error Handling Middleware; however, I’ll save that for an upcoming article wherein we discuss better architectural methods.
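For instance, here is a rough sketch of the Boom option using the @hapi/boom package (assuming its standard API), rather than hand-rolling status codes in every handler:

const Boom = require('@hapi/boom');

app.get('/books/:id', async (req, res) => {
    try {
        const book = await Book.findById(req.params.id);

        if (!book) throw Boom.notFound('No book exists with that ID');

        return res.send({ book });
    } catch (e) {
        // Boom errors carry a pre-built HTTP response; anything else
        // falls back to a generic 500 Internal Server Error.
        const { statusCode, payload } = Boom.isBoom(e)
            ? e.output
            : Boom.badImplementation().output;
        return res.status(statusCode).send(payload);
    }
});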

In the endpoint for PATCH /books/:id, we’ll expect an update object to be passed up containing updates for the book in question. For this article, we’ll allow all fields to be updated, but in the future, I’ll show how we can disallow updates of particular fields. Additionally, you’ll see that the error handling logic in our PATCH Endpoint will be the same as in our GET Endpoint. That’s an indication that we’re violating the DRY Principle, but again, we’ll touch on that later.

I’m going to expect that all updates are available on the updates object of req.body (meaning the client will send up JSON containing an updates object) and will use the Book.findByIdAndUpdate function with a special flag to perform the update.

app.patch('/books/:id', async (req, res) => {
    const { id } = req.params;
    const { updates } = req.body;
    
    try {
        const updatedBook = await Book.findByIdAndUpdate(id, updates, { runValidators: true, new: true });
        
        if (!updatedBook) return res.status(404).send({ error: 'Not Found' });
        
        return res.send({ book: updatedBook });
    } catch (e) {
        if (e instanceof mongoose.Error.CastError) {
            return res.status(400).send({ error: 'Not a valid ID' });
        } else {
            return res.status(500).send({ error: 'Internal Error' });
        }
    }
});

Notice a few things here. We first destructure id from req.params and updates from req.body.

Available on the Book model is a function by the name of findByIdAndUpdate that takes the ID of the document in question, the updates to perform, and an optional options object. Normally, Mongoose won’t re-perform validation for update operations, so the runValidators: true flag we pass in as the options object forces it to do so. Furthermore, as of Mongoose 4, Model.findByIdAndUpdate no longer returns the modified document but returns the original document instead. The new: true flag (which is false by default) overrides that behavior.
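For illustration, a hypothetical client-side call to this endpoint might look like the following (the title field is an assumption; whatever fields your Book schema actually defines would go inside updates):

// A hypothetical client-side call to the PATCH endpoint.
// Replace the ID with a real ObjectID from your database.
fetch('/books/REPLACE_WITH_A_REAL_ID', {
    method: 'PATCH',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
        updates: { title: 'An Updated Title' }
    })
});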

Finally, we can build out our DELETE endpoint, which is quite similar to all of the others:

app.delete('/books/:id', async (req, res) => {
    try {
        const deletedBook = await Book.findByIdAndDelete(req.params.id);
        
        if (!deletedBook) return res.status(404).send({ error: 'Not Found' });
        
        return res.send({ book: deletedBook });
    } catch (e) {
        if (e instanceof mongoose.Error.CastError) {
            return res.status(400).send({ error: 'Not a valid ID' });
        } else {
            return res.status(500).send({ error: 'Internal Error' });
        }
    }
});

With that, our primitive API is complete and you can test it by making HTTP Requests to all endpoints.

A Short Disclaimer About Architecture And How We’ll Rectify It

From an architectural standpoint, the code we have here is quite bad: it’s messy, it’s not DRY, it’s not SOLID; in fact, you might even call it abhorrent. These so-called “Route Handlers” are doing a lot more than just “handling routes”: they are directly interfacing with our database. That means there is absolutely no abstraction.

Let’s face it: most applications will never be this small, or else you could probably get away with a serverless architecture such as the Firebase Database. Maybe, as we’ll see later, users want the ability to upload avatars, quotes, and snippets from their books. Maybe we want to add a live chat feature between users with WebSockets, and let’s even go as far as saying we’ll open up our application to let users borrow books from one another for a small charge, at which point we need to consider Payment Integration with the Stripe API and shipping logistics with the Shippo API.

Suppose we proceed with our current architecture and add all of this functionality. These route handlers, also known as Controller Actions, are going to end up very, very large, with a high cyclomatic complexity. Such a coding style might suit us fine in the early days, but what if we decide that our data is referential and thus PostgreSQL is a better database choice than MongoDB? We now have to refactor our entire application, stripping out Mongoose, altering our Controllers, etc., all of which could lead to potential bugs in the rest of the business logic. Another such example would be that of deciding that AWS S3 is too expensive and we wish to migrate to GCP. Again, this requires an application-wide refactor.

Although there are many opinions around architecture, from Domain-Driven Design, Command Query Responsibility Segregation, and Event Sourcing, to Test-Driven Development, SOLID, Layered Architecture, Onion Architecture, and more, we’ll focus on implementing simple Layered Architecture in future articles, consisting of Controllers, Services, and Repositories, and employing Design Patterns like Composition, Adapters/Wrappers, and Inversion of Control via Dependency Injection. While, to an extent, this could be somewhat performed with JavaScript, we’ll look into TypeScript options to achieve this architecture as well, permitting us to employ functional programming paradigms such as Either Monads in addition to OOP concepts like Generics.

For now, there are two small changes we can make: because our error handling logic is quite similar in the catch block of every endpoint, we can extract it to a custom Express Error Handling Middleware function at the very end of the stack, and delegate to it from each handler by calling next().

Cleaning Up Our Architecture

At present, we are repeating a very large amount of error handling logic across all our endpoints. Instead, we can build an Express Error Handling Middleware function, which is an Express Middleware Function that gets called with an error, the req and res objects, and the next function.

For now, let’s build that middleware function. All I’m going to do is repeat the same error handling logic we are used to:

app.use((err, req, res, next) => {
    if (err instanceof mongoose.Error.ValidationError) {
        return res.status(400).send({ error: 'Validation Error' });
    } else if (err instanceof mongoose.Error.CastError) {
        return res.status(400).send({ error: 'Not a valid ID' });
    } else {
        console.log(err); // Unexpected, so worth logging.
        return res.status(500).send({ error: 'Internal error' });
    }
});

In general, rather than using an if/else if/else chain to determine error instances, you can switch over the error’s constructor. That approach doesn’t appear to work with Mongoose Errors, however, so I’ll leave what we have.

In a synchronous endpoint/route handler, if you throw an error, Express will catch and process it with no extra work required on your part. Unfortunately, that’s not the case for us. We are dealing with asynchronous code. In order to delegate error handling to Express with async route handlers, we must catch the error ourselves and pass it to next().

So, I’ll just permit next to be the third argument into the endpoint, and I’ll remove the error handling logic in the catch blocks in favor of just passing the error instance to next, as such:

app.post('/books', async (req, res, next) => {
    try {
        const book = new Book(req.body.book);
        await book.save();
        return res.status(201).send({ book });
    } catch (e) {
        next(e);
    }
});

If you do this to all route handlers, you should end up with the following code:

const express = require('express'); 
const mongoose = require('mongoose');

// Database connection and model.
require('./db/mongoose.js')();
const Book = require('./models/book.js');

// This creates our Express App.
const app = express(); 

// Define middleware.
app.use(express.json());
app.use(express.urlencoded({ extended: true }));

// Listening on port 3000 (arbitrary).
// Not a TCP or UDP well-known port. 
// Does not require superuser privileges.
const PORT = 3000;

// We will build our API here.
// HTTP POST /books
app.post('/books', async (req, res, next) => {
    try {
        const book = new Book(req.body.book);
        await book.save();    
        return res.status(201).send({ book });
    } catch (e) {
        next(e);
    }
});

// HTTP GET /books/:id
app.get('/books/:id', async (req, res, next) => {
    try {
        const book = await Book.findById(req.params.id);
        
        if (!book) return res.status(404).send({ error: 'Not Found' });
        
        return res.send({ book });
    } catch (e) {
        next(e);
    }
});

// HTTP PATCH /books/:id
app.patch('/books/:id', async (req, res, next) => {
    const { id } = req.params;
    const { updates } = req.body;
    
    try {
        const updatedBook = await Book.findByIdAndUpdate(id, updates, { runValidators: true, new: true });
        
        if (!updatedBook) return res.status(404).send({ error: 'Not Found' });
        
        return res.send({ book: updatedBook });
    } catch (e) {
        next(e);
    }
});

// HTTP DELETE /books/:id
app.delete('/books/:id', async (req, res, next) => {
    try {
        const deletedBook = await Book.findByIdAndDelete(req.params.id);
        
        if (!deletedBook) return res.status(404).send({ error: 'Not Found' });
        
        return res.send({ book: deletedBook });
    } catch (e) {
        next(e);
    }
});

// Notice - bottom of stack.
app.use((err, req, res, next) => {
    if (err instanceof mongoose.Error.ValidationError) {
        return res.status(400).send({ error: 'Validation Error' });
    } else if (err instanceof mongoose.Error.CastError) {
        return res.status(400).send({ error: 'Not a valid ID' });
    } else {
        console.log(err); // Unexpected, so worth logging.
        return res.status(500).send({ error: 'Internal error' });
    }
});

// Binding our application to port 3000.
app.listen(PORT, () => console.log(`Server is up on port ${PORT}.`));

Moving further, it would be worth separating our error handling middleware into another file, but that’s trivial, and we’ll see it in future articles in this series. Additionally, we could use an NPM module named express-async-errors to avoid having to call next in the catch block, but again, I’m trying to show you how things are done officially.
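For completeness, here is a sketch of that shortcut, assuming the express-async-errors package (which patches Express when required):

// Requiring the package once (after express) is all the setup it needs.
require('express-async-errors');

// Errors thrown inside async handlers now reach the error handling
// middleware automatically; no try/catch or next(e) is required.
app.get('/books/:id', async (req, res) => {
    const book = await Book.findById(req.params.id); // A CastError here propagates on its own.

    if (!book) return res.status(404).send({ error: 'Not Found' });

    return res.send({ book });
});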

A Word About CORS And The Same Origin Policy

Suppose your website is served from the domain myWebsite.com but your server is at myOtherDomain.com/api. CORS stands for Cross-Origin Resource Sharing and is a mechanism by which cross-domain requests can be performed. In the case above, since the server and front-end JS code are at different domains, you’d be making a request across two different origins, which is commonly restricted by the browser for security reasons, and mitigated by supplying specific HTTP headers.

The Same Origin Policy is what performs those aforementioned restrictions: a web browser will only permit requests to be made across the same origin.
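As a minimal sketch, the popular cors NPM package can supply those headers for us (the origin value below is a placeholder for your front-end domain):

const cors = require('cors');

// Permit cross-origin requests from our hypothetical front-end origin only.
app.use(cors({ origin: 'https://mywebsite.com' }));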

We’ll touch on CORS and SOP later when we build a Webpack bundled front-end for our Book API with React.

Conclusion And What’s Next

We have discussed a lot in this article. Perhaps it wasn’t all fully practical, but it hopefully got you more comfortable working with Express and ES6+ JavaScript features. If you are new to programming and Node is the first path down which you are embarking, hopefully the references to statically typed languages like Java, C++, and C# helped to highlight some of the differences between JavaScript and its static counterparts.

Next time, we’ll finish building out our Book API by making some fixes to our current setup with regards to the Book Routes, as well as adding in User Authentication so that users can own books. We’ll do all of this with a similar architecture to what I described here and with MongoDB for data persistence. Finally, we’ll permit users to upload avatar images to AWS S3 via Buffers.

In the article thereafter, we’ll be rebuilding our application from the ground up in TypeScript, still with Express. We’ll also move to PostgreSQL with Knex instead of MongoDB with Mongoose, so as to depict better architectural practices. Finally, we’ll update our avatar image uploading process to use Node Streams (we’ll discuss Writable, Readable, Duplex, and Transform Streams). Along the way, we’ll cover a great number of design and architectural patterns and functional paradigms, including:

  • Controllers/Controller Actions
  • Services
  • Repositories
  • Data Mapping
  • The Adapter Pattern
  • The Factory Pattern
  • The Delegation Pattern
  • OOP Principles and Composition vs Inheritance
  • Inversion of Control via Dependency Injection
  • SOLID Principles
  • Coding against interfaces
  • Data Transfer Objects
  • Domain Models and Domain Entities
  • Either Monads
  • Validation
  • Decorators
  • Logging and Logging Levels
  • Unit Tests, Integration Tests (E2E), and Mutation Tests
  • The Structured Query Language
  • Relations
  • HTTP/Express Security Best Practices
  • Node Best Practices
  • OWASP Security Best Practices
  • And more.

Using that new architecture, in the article after that, we’ll write Unit, Integration, and Mutation tests, aiming for close to 100 percent test coverage, and we’ll finally discuss setting up a remote CI/CD pipeline with CircleCI, as well as Message Busses, Job/Task Scheduling, and load balancing/reverse proxying.

Hopefully, this article has been helpful, and if you have any queries or concerns, let me know in the comments below.

Smashing Editorial
(dm, yk, il)

Source: Smashing Magazine, Getting Started With An Express And ES6+ JavaScript Stack

Collective #567

dreamt up by webguru in Uncategorized | Comments Off on Collective #567



Our Sponsor

Black Friday Is Coming

Not only do you get the best deal ever on Divi memberships and upgrades, but you can also win a Mac Pro worth over $6,000!

Enter now




Pika Registry

Pika is a new kind of package registry and code editor for package authors. Open for early access.

Check it out

Tetris & Snake

Can you play Tetris and Snake at the same time? Try it in this cool experiment by Grégoire Divaret-Chauveau.

Check it out


LegraJS

Legra is a small JavaScript library that lets you draw LEGO-like brick shapes on an HTML canvas element.

Check it out




Collective #567 was written by Pedro Botelho and published on Codrops.


Source: Codrops, Collective #567

How To Use FOMO To Increase Conversions

dreamt up by webguru in Uncategorized | Comments Off on How To Use FOMO To Increase Conversions

How To Use FOMO To Increase Conversions

How To Use FOMO To Increase Conversions

Suzanne Scacca



Consumers are motivated by need and desire. And sometimes, just sometimes, they’re motivated by FOMO. That’s right: we can now add the ‘Fear Of Missing Out’ to the list of drivers that get consumers onto our websites and into our apps.

With that said, when we take a closer look at what FOMO really means and the negative impact it can have on consumers, is it something we really want to be encouraging as we build digital experiences for them? My answer to that is:

Yes, but you must use FOMO responsibly.

FOMO can be a really effective tool to add to a marketing and sales strategy. As a web designer, though, you need to find ethical ways to appeal to your users’ fear of missing out. Today, I’m going to show you some options for doing this.

A More Ethical Way To Design with FOMO

FOMO stands for “fear of missing out”, and while it might seem like some innocuous acronym like YOLO or LMAO, this isn’t a cute way of saying “Wish I were there!”.

The fear part of FOMO is all too real.

A 2013 study titled “Motivational, emotional, and behavioral correlates of fear of missing out” defined FOMO as:

A pervasive apprehension that others might be having rewarding experiences from which one is absent, FoMO is characterized by the desire to stay continually connected with what others are doing.

One of the conclusions from the report was that “FoMO was associated with lower need satisfaction, mood, and life satisfaction.”

It’s not just scientists taking note of the negative effects of FOMO in marketing, social media or otherwise. The Competition and Markets Authority went after hotel booking sites for using misleadingly urgent and deceptive discount marketing messages to increase sales.

Even without the fear of retribution from some standards authority, you really need to think about how your web and mobile apps leave your users feeling. A little bit of envy might be fine, but once the general sentiment trickles over to jealousy, disappointment or stress, it’s time to reassess what you’re doing and why.

Let’s take a look at some ways you can leverage the underlying concept of “missing out” and strip away the fear elements.

Quick note: All of the examples below are from mobile apps, however, you can use these design principles on websites and PWAs as well.

Gently Remind Visitors About Limited Availability

There’s nothing wrong with presenting limits to your users on what’s available or for how long it will remain available. It only becomes a problem when how you convey this sense of urgency or limitation causes stressful decision-making.

This is something I talked about in a recent post, “How to Stop Analysis Paralysis with Web Design”.

Basically, when you induce stress in your visitors or consumers, it makes the decision-making process more difficult and can lead to regretful purchases or no purchases at all. In that last article, the focus was on the drawbacks of presenting customers with too many choices.

However, the same kind of response (i.e. dissatisfaction and overwhelm) can happen when you put pressure on them to make a choice on the spot.

So, instead of displaying a large timer counting down the minutes left to buy items in their shopping cart or a bright red banner that screams “24-Hour Sale!”, use more gentle reminders around the site or app.

Best Buy has an entire section on its product pages dedicated to in-store and online availability:

Best Buy lets customers know when products are out of stock in store and online. (Source: Best Buy)

Now, if this were a product with only one color or memory option, I’d suggest removing it from the online inventory altogether. If you can’t provide a date when the product will become available again or put customers on a waitlist, don’t bother teasing them with an out-of-stock listing.

That said, this item has multiple variations, which makes the “sold out” notice quite potent.

Paul Messinger, a professor of business and researcher at the University of Alberta, commented on this phenomenon:

Sold-out products create a sense of immediacy for customers; they feel that if one product is gone, the next item could also sell out. Our research shows there’s also an information cascade, where people infer that if a product is sold out, it must have been good and therefore a similar available product will also be desirable.

What’s also nice about displaying sold-out products is that it reduces the number of choices consumers have to make. Granted, some may be unhappy because the silver phone they wanted is unavailable, but, as Messinger says, this limitation on what they can buy might encourage them to try another variation of the product.

One of my absolute favorite examples of gently nudging consumers to use or buy your products is Hulu:

The Hulu app has an entire tab dedicated to “Expiring” content. (Source: Hulu)

There is an entire tab in the app that lets users know which content is about to expire.

For those of you who stream content like a maniac (like myself), you know how easy it is to lose track of shows and movies you’ve added to your list. You also know how hard it can be to find the perfect thing to watch when you have dozens of options sitting in your queue, especially if you use more than one streaming service.

That’s why this “Expiring” tab is brilliant. The second I see it, I think, “Either use it or lose it, Suzanne” — which is incredibly motivating. Also, the fact that I have a much shorter list to work with helps me get to a decision more quickly.

This would be useful for e-commerce websites, for sure. If you have products that are low in inventory, give them a dedicated space for shoppers to peruse — kind of like a bargain bin without the bargain.

If your website runs a number of offers simultaneously, you could use a similar approach as well. Create a page for “Offers” or “Rewards” and break out a separate tab that shows users all the offers that are about to expire.

Call Attention to Rewards

When selling something online — be it a subscription to a repository of plugins or a store full of products — don’t forget to enable account registration. Sure, it’s a nice touch for users that want the convenience of saving account details so they don’t have to input them with each new purchase. There’s another reason to encourage your users to register though:

So you have a way to call attention to their spendable rewards.

FOMO isn’t always the fear of missing out on what others are doing. Sometimes it’s just a fear of missing the chance to get a really good deal. Promoting attractive sales offers (“75% off everything in store!”) is one way to do that, but, again, you have to recognize that that’s only going to stir up issues caused by the paradox of choice.

A softer but still effective way to compel users to buy sooner rather than later is to show off their rewards totals or expiration dates.

As a Gap customer, this is one of my favorite things about shopping with them. Whether I’m in store, on the app, shopping through the website or looking through my email, I receive these kinds of reminders:

Gap reminds logged-in users when they have rewards to spend. (Source: Gap)

The “Redeem your Super Cash” reminder is the first thing I see when I log in. Even if I’ve gone to the app with the intention of just window browsing, that rewards reminder (and the impending expiration) almost always motivates me to buy something so I don’t lose my member perks.

Unlike sales banners that promote generic offers, this approach works really well because you’re appealing to loyal customers — the ones who’ve already signed up for an account and have a history of buying from you.

And if you’re worried about a banner of that size taking up too much space in your app or mobile website, think again:

Gap uses a pulsing blue ticker in the top-left corner to remind users about unspent rewards. (Source: Gap)

Gap doesn’t continually show the rewards reminder.

See the icon in the top left corner with the circle over it? That circle is pulsing. It’s there to let customers know that there’s something to look at before they check out. And that something is the rewards they need to spend before they lose them.

Hotels.com, on the other hand, dedicates an entire page to rewards:

Hotels.com users can access their free night rewards on the “Rewards” tab. (Source: Hotels.com)

It’s similar to that urge people feel to log into social media just to check on what’s going on and to make sure they’re not missing anything. This “Rewards” tab should send a similar vibe: “Hmmm… I wonder how close I am to my free night?”

Although you can’t see it here, Hotels.com has a policy about how long customers can hold onto these earned nights before they lose them. (It’s just below this section.) By gently reminding users about this stipulation, it likely encourages its rewards members to book more trips so they can get their free night.

Encourage Sharing with Friends and Family

One of the problems with building FOMO into a website — much like any marketing you do for business — is that it’s coming from you. Until you’ve earned the trust of visitors and users, how are they supposed to believe a product marked as a “Top Seller” really is what you claim it to be? Social proof is supposed to help mitigate these kinds of concerns, but even that can be faked.

You know what I think is a more effective way to generate FOMO? Let your customers and clients do it for you.

Here’s how Airbnb does it:

Airbnb rewards its users for inviting friends. (Source: Airbnb)

The “Invite friends” feature encourages users to let their friends, family and colleagues know about how awesome the Airbnb experience is.

Hey, I just booked this awesome apartment in Montreal for Christmas. You’ve got to check this out! Oh yeah, you also get $40 off your first booking!

Even the headline on the landing page encourages them to share the experience; not just do it to get free travel credit (though that’s a nice incentive, too):

Airbnb encourages its users to share their love of travel by rewarding them and their referrals with travel credits. (Source: Airbnb)

Imagine that friend who’s busy running a business and in dire need of a vacation. They receive this offer from you — a person they know and trust. Of course, their reaction is going to be, “I need to do that, too!” And with a discount code in hand, that’s a pretty strong source of motivation to get in the app and make a purchase.

You’ll find another great example of generating FOMO through your users from the 23andMe website:

The home page for 23andMe invites users to ‘Share your Ancestry’. (Source: 23andMe)

For those of you who haven’t signed up for one of these genetic testing services, it’s actually pretty cool. You submit a saliva sample and they tell you what your ancestral background is (as well as how it can affect your health). But it’s more than just, “Your maternal family originates from Turkey.” It gets super-specific on what parts of the world your ancestors are from.

Notice that banner in the screenshot above that says “Share your Ancestry”? That’s where users find auto-generated social posts that are designed to be share-worthy (they look like Facebook and Instagram Story cards):

23andMe auto-generates social posts users can share with their friends and followers. (Source: 23andMe)

This is my ancestral breakdown according to 23andMe. So, let’s say I want to joke about how boringly anti-nomadic my ancestors were on Twitter. I could edit the banner or share it as is. And guess what? That’s free advertising for 23andMe, even if I chose to ditch the logo they placed at the bottom of the file.

As those posts reach social media connections — those that know the user or those that are only acquainted with them online — FOMO starts to rear its head. “Oooh! I really want one of these! Where’d you find this out?”

With this kind of FOMO marketing on your site or app, you can stop relying so much on heavily-discounted sales events and other urgency-inducing tactics (which will cost you more in the long run). Instead, let your users generate that intensified interest.

Use More Grounded Photos and Designs

You’ve no doubt heard about lifestyle influencers using shady promotional tactics to increase sales.

One of the most well-known examples of this is the Fyre Festival, which created a bunch of buzz on social media thanks to promotional videos of celebrities and supermodels partying it up in the Caribbean. The people behind this failed festival didn’t care about the experience. They focused solely on the image of it and consumers ate it up with a spoon — until they realized that image was a lie once they got there.

Then, you have micro-influencers who try to make money from affiliate sales. However, all is usually not what it seems as Jordan Bunker explained to The Guardian:

All isn’t how it is perceived on Instagram. People assume I have a great life and everything is handed to me. I live with my parents and I work from a desk in my room; it’s not like I have a separate working space or office.

That’s not the only deception. Influencers often make their luxurious lives seem like something that’s easy to achieve. The reality, however, is that many of them have to work really hard to stage their life, every second of every day, hoping to get the perfect shot that will make consumers want to follow them or buy the stuff they promote.

But as Lucie Greene, an analyst who specializes in consumer behavior, pointed out:

We’re seeing a rising awareness of how social media use and influencer culture affects mental health, from Fomo (Fear of Missing Out) to driving compulsive, addictive consumption, to feelings of isolation.

Granted, the messages alone that influencers send to followers are often problematic. But so, too, are the images. So, as you design your website and integrate photos from your clients or from stock photo sites, think about what message you’re really sending.

Sephora, for instance, promotes its products with photos of the actual products. You might see a model or two on the top of the home page. For the most part, though, the focus is on the products.

That said, cosmetics and other beauty products can be used to convey a certain image and lifestyle — one that consumers desperately want. So, is Sephora missing out on an opportunity to create a “Sephora Lifestyle” by not photographing models using its products?

Sephora lets its users’ photos inspire the right kind of FOMO. (Source: Sephora)

Unlike many other retailers who might share photos of models living their lives in some far-off, exotic locale while wearing their products, Sephora doesn’t do that. The only time you really see photos of its cosmetics and products in action is here, in its “Inspire” community.

So, rather than leave its customers pining for some life that they may unconsciously associate with the red lipstick they were thinking of picking up, real customers get the chance to paint a more realistic portrait of its products.

A gallery of product photos from the Sephora Inspire page. (Source: Sephora)

As consumers grow weary of artificially enhanced photos and scenarios, you’re going to find it harder to make them feel like they’re missing out. However, by allowing your customers to provide a real look at what your products can do (and this goes for any kind of product, physical or digital), that’s where you’ll start to see consumers responding to feelings of missing out.

Before I wrap up here, I want to point out that this isn’t just for companies that sell affordable products.

The Inner Circle, for example, is an exclusive dating app. In order to join, users must first be prescreened and approved.

Now, you might think that a luxury brand like that would want to use influencer-like photos to show users how much they’re missing out by not dating in their “class”. But they don’t.

The Inner Circle luxury dating app doesn’t focus on the luxurious side of dating. (Source: The Inner Circle)

In this first example from the app’s signup page, you can see that the focus is on finding a popular spot to hang out and meet people. While the black-and-white filter does give it a swankier vibe, there’s nothing about the people in the photo that necessarily screams “Exclusive!”

The same thing goes for this photo:

The Inner Circle paints dating in a positive and natural light. (Source: The Inner Circle)

This is the kind of date most people would go on: a date in the park. The people in the photo aren’t all glammed up or wearing clothes made by high-end luxury designers.

These photos feel accessible. They let users know that, at the end of the day, they’re using this app to make real-life connections. There’s nothing exclusive about that.

And if a luxury brand like The Inner Circle can send that kind of message to its users with photos, then any brand should be able to do the same and be successful with it. Just be honest in what you’re portraying, whether it’s a photo of someone cooking with your products or a look inside the real (not illustrated) dashboard of your SaaS.

If you want to give prospects the feeling that they’re about to miss out on something worthwhile, just be real with them.

Wrapping Up

Maybe not today and maybe not tomorrow, but deceptive FOMO tactics will eventually catch up with you when customers start to realize they were misled by inflated numbers, exaggerated scenarios or seemingly time-sensitive or exclusive offers.

Remember: the websites and apps you build for clients shouldn’t just attract and convert customers. They also need to help your clients retain that business and loyalty over the long term. By being more responsible with the messages you’re sending, you can help them accomplish that.

Smashing Editorial
(ra, yk, il)

Source: Smashing Magazine, How To Use FOMO To Increase Conversions

Performing iOS Animations On Views With UIKit And UIView

dreamt up by webguru in Uncategorized | Comments Off on Performing iOS Animations On Views With UIKit And UIView

Performing iOS Animations On Views With UIKit And UIView

Performing iOS Animations On Views With UIKit And UIView

Saravanan V



I have been an iOS developer for over a decade now and have rarely seen articles that consolidate all possible ways to perform animations in iOS. This article aims to be a primer on iOS animations, with the intent of exhaustively covering the different ways of doing them.

Given the extensiveness of the topic, we will cover each part succinctly, at a fairly high level. The goal is to equip the reader with a set of choices for adding animations to their iOS app.

Before we start off with topics related to iOS, let us take a brief look at animation speed.

Animating At 60FPS

Generally in videos, each frame is represented by an image, and the frame rate determines the number of images flipped through in the sequence. This is termed ‘frames per second’, or FPS.

FPS determines the number of still images flipped within a second, which means that the more images/frames there are, the more detail/information is displayed in the video. This holds true for animations as well.

FPS is typically used to determine the quality of animations. There is a popular opinion that any good animation should run at 60fps or higher; anything less than 60fps feels a bit off. At 60fps, each frame has a budget of roughly 16.7 milliseconds (1000/60) in which all of its work must complete.

Do you want to see the difference between 30FPS and 60FPS? Check this!

Did you notice the difference? Human eyes can definitely perceive the jitter at lower fps. Hence, it is always good practice to make sure that any animation you create adheres to the ground rule of running at 60FPS or higher. This makes it feel more realistic and alive.

Having looked at FPS, let’s now delve into the different core iOS frameworks that provide us a way to perform animations.

Core Frameworks

In this section, we will touch upon the frameworks in the iOS SDK which can be used for creating view animations. We will do a quick walk through each of them, explaining their feature set with a relevant example.

UIKit/ UIView Animations

UIView is the base class for any view that displays content in iOS apps.

UIKit, the framework that gives us UIView, already provides us some basic animation functions which make it convenient for developers to achieve more by doing less.

The API, UIView.animate, is the easiest way to animate views since any view’s properties can be easily animated by providing the property values in the block-based syntax.

In UIKit animations, it is recommended to modify only the animatable properties of UIView; otherwise, the animations might cause the view to end up in an unexpected state.

animate(withDuration:animations:completion:)

This method takes in the animation duration and a block containing the changes to the view’s animatable properties that need to be animated. The completion block gives a callback when the view is done performing the animation.

Almost any kind of animation like moving, scaling, rotating, fading, etc. on a view can be achieved with this single API.

Now, consider that you want to animate a button size change or you want a particular view to zoom into the screen. This is how we can do it using the UIView.animate API:

let newButtonWidth: CGFloat = 60

UIView.animate(withDuration: 2.0) { //1
    self.button.frame = CGRect(x: 0, y: 0, width: newButtonWidth, height: newButtonWidth) //2
    self.button.center = self.view.center //3
}

Here’s what we are doing here:

  1. We call the UIView.animate method with a duration value passed to it that represents how long the animation, described inside the block, should run.
  2. We set the new frame of the button that should represent the final state of the animation.
  3. We set the button center with its superview’s center so that it remains at the center of the screen.

The above block of animation code should trigger the animation of the button’s frame changing from current frame:

Width = 0, Height = 0

To the final frame:

Width = Height = newButtonWidth

And here’s what the animation would look like:

animate(withDuration:delay:usingSpringWithDamping:initialSpringVelocity:options:animations:completion:)

This method is like an extension of the animate method: you can do everything the prior API can, with some physics behaviors added to the view animations.

For example, if you want to achieve spring damping effects in the animation that we have done above, then this is how the code would look like:

let newButtonWidth: CGFloat = 60
UIView.animate(withDuration: 1.0, //1
    delay: 0.0, //2
    usingSpringWithDamping: 0.3, //3
    initialSpringVelocity: 1, //4
    options: UIView.AnimationOptions.curveEaseInOut, //5
    animations: ({ //6
        self.button.frame = CGRect(x: 0, y: 0, width: newButtonWidth, height: newButtonWidth)
        self.button.center = self.view.center
}), completion: nil)

Here’s the set of parameters we use:

  1. duration
    Represents the duration of the animation determining how long the block of code should run.
  2. delay
    Represents the initial delay that we want to have before the start of the animation.
  3. usingSpringWithDamping
    Represents how springy we want the view’s motion to be. The value must be between 0 and 1; the lower the value, the higher the spring oscillation.
  4. velocity
    Represents the speed at which the animation should start.
  5. options
    Type of animation curve that you want to apply to your view animation.
  6. Finally, the block of code where we set the frame of the button that needs to be animated. It is the same as the previous animation.

And here’s what the animation would look like with the above animation configuration:

UIViewPropertyAnimator

For a bit more control over animations, UIViewPropertyAnimator comes in handy: it provides us a way to pause and resume animations. You can have custom timing and make your animation interactive and interruptible. This is very helpful when performing animations that also interact with user actions.

The classic ‘Slide to Unlock’ gesture and the player view dismiss/ expand animation (in the Music app) are examples of interactive and interruptible animations. You can start moving a view with your finger, then release it and the view will go back to its original position. Alternatively, you can catch the view during the animation and continue dragging it with your finger.

Following is a simple example of how we could achieve the animation using UIViewPropertyAnimator:

let newButtonWidth: CGFloat = 60
let animator = UIViewPropertyAnimator(duration:0.3, curve: .linear) { //1
    self.button.frame = CGRect(x: 0, y: 0, width: newButtonWidth, height: newButtonWidth)
    self.button.center = self.view.center
}
animator.startAnimation() //2

Here’s what we are doing:

  1. We call the UIViewPropertyAnimator initializer, passing the duration and the animation curve.
  2. Unlike both of the UIView.animate APIs above, the animation won’t start unless you start it yourself, i.e. you’re in full control of the complete animation process/flow.

Now, let’s say that you want even more control over the animations. For example, you want to design and control each and every frame in the animation. There’s another API for that, animateKeyframes. But before we delve into it, let’s quickly look at what a frame is, in an animation.

What Is A frame?

A collection of the view’s frame changes/transitions, from the start state to the final state, is defined as the animation, and each position of the view during the animation is called a frame.

animateKeyframes

This API provides a way to design animations such that you can define multiple animations with different timings and transitions. Afterwards, the API simply integrates all the animations into one seamless experience.

Let’s say that we want to move our button on the screen in a random fashion. Let’s see how we can use the keyframe animation API to do so.

let start = self.button.center // Remember the starting position; the keyframes below return to it.

UIView.animateKeyframes(withDuration: 5, //1
  delay: 0, //2
  options: .calculationModeLinear, //3
  animations: { //4
    UIView.addKeyframe( //5
      withRelativeStartTime: 0.25, //6
      relativeDuration: 0.25) { //7
        self.button.center = CGPoint(x: self.view.bounds.midX, y: self.view.bounds.maxY) //8
    }

    UIView.addKeyframe(withRelativeStartTime: 0.5, relativeDuration: 0.25) {
        self.button.center = CGPoint(x: self.view.bounds.width, y: start.y)
    }

    UIView.addKeyframe(withRelativeStartTime: 0.75, relativeDuration: 0.25) {
        self.button.center = start
    }
})

Here’s the breakdown:

  1. duration
    Call the API by passing in the duration of the animation.
  2. delay
    Initial delay duration of the animation.
  3. options
    The type of animation curve that you want to apply to your view animation.
  4. animations
    Block that takes all keyframe animations designed by the developer/ user.
  5. addKeyFrame
    Call the API to design each and every animation. In our case, we have defined each move of the button. We can have as many such animations as we need, added to the block.
  6. relativeStartTime
    Defines the start time of the animation in the collection of the animation block.
  7. relativeDuration
    Defines the overall duration of this specific animation.
  8. center
    In our case, we simply change the center property of the button to move the button around the screen.

And this is how the final animations looks like:

CoreAnimation

Any UIKit based animation is internally translated into core animations. Thus, the Core Animation framework acts as a backing layer or backbone for any UIKit animation. Hence, all UIKit animation APIs are nothing but encapsulated layers of the core animation APIs in an easily consumable or convenient fashion.

UIKit animation APIs don’t provide much control over animations that have been performed over a view since they are used mostly for animatable properties of the view. Hence in such cases, where you intend to have control over every frame of the animation, it is better to use the underlying core animation APIs directly. Alternatively, both the UIView animations and core animations can be used in conjunction as well.

UIView + Core Animation

Let’s see how we can recreate the same button change animation along with specifying the timing curve using the UIView and Core Animation APIs.

We can use CATransaction’s timing functions, which lets you specify and control the animation curve.

Let’s look at an example of a button size change animation with its corner radius utilizing the CATransaction’s timing function and a combination of UIView animations:

let oldValue = button.frame.width/2
let newButtonWidth: CGFloat = 60

/* Do Animations */
CATransaction.begin() //1
CATransaction.setAnimationDuration(2.0) //2
CATransaction.setAnimationTimingFunction(CAMediaTimingFunction(name: CAMediaTimingFunctionName.easeInEaseOut)) //3

// View animations //4
UIView.animate(withDuration: 1.0) {
    self.button.frame = CGRect(x: 0, y: 0, width: newButtonWidth, height: newButtonWidth)
    self.button.center = self.view.center
}

// Layer animations
let cornerAnimation = CABasicAnimation(keyPath: #keyPath(CALayer.cornerRadius)) //5
cornerAnimation.fromValue = oldValue //6
cornerAnimation.toValue = newButtonWidth/2 //7

button.layer.cornerRadius = newButtonWidth/2 //8
button.layer.add(cornerAnimation, forKey: #keyPath(CALayer.cornerRadius)) //9

CATransaction.commit() //10

Here’s the breakdown:

  1. begin
    Represents the start of the animation code block.
  2. duration
    Overall animation duration.
  3. curve
    Represents the timing curve that needs to be applied to the animation.
  4. UIView.animate
    Our first animation to change the frame of the button.
  5. CABasicAnimation
    We create the CABasicAnimation object by referring the cornerRadius of the button as the keypath since that’s what we want to animate. Similarly, if you want to have granular level control over the keyframe animations, then you can use the CAKeyframeAnimation class.
  6. fromValue
    Represents the starting value of the animation, i.e. the initial cornerRadius value of the button from where the animation must start off.
  7. toValue
    Represents the final value of the animation, i.e. the final cornerRadius value of the button where the animation must end.
  8. cornerRadius
    We must set the cornerRadius property of the button with the final value of the animation else the button’s cornerRadius value will get auto-reverted to its initial value after the animation completes.
  9. addAnimation
    We attach the animation object that contains the configuration of the entire animation process to the layer by representing the Keypath for which the animation needs to be performed.
  10. commit
    Represents the end of the animation code block and starts off the animation.

This is how the final animation would look like:

This blog is a great read to help create more advanced animations as it neatly walks you through most of the Core Animation framework APIs with instructions guiding you through every step of the way.

UIKitDynamics

UIKit Dynamics is the physics engine for UIKit, which enables you to add physics behaviors like collision, gravity, push, snap, etc., to UIKit controls.

UIDynamicAnimator

This is the admin class of the UIKit Dynamics framework; it regulates all animations triggered by any given UI control.

UIDynamicBehavior

It enables you to add a physics behavior to an animator, which then performs it on the view attached to it.

Different kinds of behaviors for UIKitDynamics include:

  • UIAttachmentBehavior
  • UICollisionBehavior
  • UIFieldBehavior
  • UIGravityBehavior
  • UIPushBehavior
  • UISnapBehavior

The architecture of UIKitDynamics looks something like this. Note that Items 1 to 5 can be replaced with a single view.

Let us apply some physics behavior to our button. We will see how to apply gravity to the button so that it gives us a feeling of dealing with a real object.

var dynamicAnimator   : UIDynamicAnimator!
var gravityBehavior   : UIGravityBehavior!

dynamicAnimator = UIDynamicAnimator(referenceView: self.view) //1

gravityBehavior = UIGravityBehavior(items: [button]) //2
dynamicAnimator.addBehavior(gravityBehavior) //3

Here’s the breakdown:

  1. UIDynamicAnimator
    We have created a UIDynamicAnimator object, which acts as an orchestrator for performing animations. We have also passed the superview of our button as the reference view.
  2. UIGravityBehavior
    We have created a UIGravityBehavior object and passed our button in its items array, the elements on which this behavior is injected.
  3. addBehavior
    We have added the gravity object to the animator.

    This should create an animation as shown below:

    Notice how the button falls off from the center (its original position) of the screen to the bottom and beyond.

    We should tell the animator to consider the bottom of the screen to be the ground. This is where UICollisionBehavior comes into picture.

    var dynamicAnimator   : UIDynamicAnimator!
    var gravityBehavior   : UIGravityBehavior!
    var collisionBehavior : UICollisionBehavior!
    
    dynamicAnimator = UIDynamicAnimator(referenceView: self.view) //1
    
    gravityBehavior = UIGravityBehavior(items: [button]) //2
    dynamicAnimator.addBehavior(gravityBehavior) //3
    
    collisionBehavior = UICollisionBehavior(items: [button]) //4
    collisionBehavior.translatesReferenceBoundsIntoBoundary = true //5
    dynamicAnimator.addBehavior(collisionBehavior) //6
  4. UICollisionBehavior
    We have created a UICollisionBehavior object and passed along the button so that the behavior is added to the element.
  5. translatesReferenceBoundsIntoBoundary
    Enabling this property tells the animator to take the reference views boundary as the end, which is the bottom of the screen in our case.
  6. addBehavior
    We have added collision behavior to the animator here.

    Now, our button should hit the ground and stand still as shown below:

    That’s pretty neat, isn’t it?

    Now, let us try adding a bouncing effect so that our object feels more real. To do that, we will use the UIDynamicItemBehavior class.

    var dynamicAnimator   : UIDynamicAnimator!
    var gravityBehavior   : UIGravityBehavior!
    var collisionBehavior : UICollisionBehavior!
    var bouncingBehavior  : UIDynamicItemBehavior!
    
    dynamicAnimator = UIDynamicAnimator(referenceView: self.view) //1
    
    gravityBehavior = UIGravityBehavior(items: [button]) //2
    dynamicAnimator.addBehavior(gravityBehavior) //3
    
    collisionBehavior = UICollisionBehavior(items: [button]) //4
    collisionBehavior.translatesReferenceBoundsIntoBoundary = true //5
    dynamicAnimator.addBehavior(collisionBehavior) //6
    
    //Adding the bounce effect
    bouncingBehavior = UIDynamicItemBehavior(items: [button]) //7
    bouncingBehavior.elasticity = 0.75 //8
    dynamicAnimator.addBehavior(bouncingBehavior) //9
  7. UIDynamicItemBehavior
    We have created a UIDynamicItemBehavior object and passed along the button so that the behavior is added to the element.
  8. elasticity
    The value must be between 0 and 1; it represents the elasticity, i.e. how much bounce the object retains each time it hits the ground. This is where the magic happens: by tweaking this property, you can differentiate between different kinds of objects like balls, bottles, hard objects, and so on.
  9. addBehavior
    We have added the bouncing behavior to the animator here.

Now, our button should bounce when it hits the ground as shown below:

This repo is quite helpful and shows all UIKitDynamics behaviors in action. It also provides source code to play around with each behavior. That, in my opinion, should serve as an extensive list of ways to perform iOS animations on views!

In the next section, we will take a brief look into the tools that will aid us in measuring the performance of animations. I would also recommend looking at ways to optimize your Xcode build, since it will save a huge amount of your development time.

Performance Tuning

In this section, we will look at ways to measure and tune the performance of iOS animations. As an iOS developer, you might have already used Xcode Instruments like Memory Leaks and Allocations for measuring the performance of the overall app. Similarly, there are instruments that can be used to measure the performance of animations.

Core Animation Instrument

Try the Core Animation instrument and you should be able to see the FPS that your app screen delivers. This is a great way to measure the performance/ speed of any animation rendered in your iOS app.

Drawing

FPS is vastly lowered in apps that display heavy content, like images with effects such as shadows. In such cases, instead of assigning the image directly to the UIImageView’s image property, try drawing the image separately in a context using Core Graphics APIs. This greatly reduces the image display time by performing the image decompression logic asynchronously on a separate thread instead of the main thread.

Rasterization

Rasterization is a process used to cache complex layer information so that these views aren’t redrawn whenever they’re rendered. Redrawing of views is the major cause of reduced FPS; hence, it is best to apply rasterization to views that are going to be reused several times.
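Enabling rasterization itself is just a couple of lines on the layer. A minimal sketch, assuming complexView is a view that gets rendered repeatedly without changing:

// Cache the layer's rendered contents as a bitmap so it isn't redrawn on every frame
complexView.layer.shouldRasterize = true
// Match the screen scale, or the cached bitmap will look blurry on Retina displays
complexView.layer.rasterizationScale = UIScreen.main.scale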

Wrapping Up

To conclude, I have also summed up a list of useful resources for iOS animations, which you may find very handy when working on them. Additionally, you may find this set of design tools helpful as a (design) step before delving into animations.

I hope I have been able to cover as many topics as possible surrounding iOS animations. If there is anything I may have missed in this article, please let me know in the comments section below and I would be glad to make the addition!

Smashing Editorial
(dm, yk, il)

Source: Smashing Magazine, Performing iOS Animations On Views With UIKit And UIView

Make Your Own Expanding And Contracting Content Panels

dreamt up by webguru in Uncategorized | Comments Off on Make Your Own Expanding And Contracting Content Panels

Make Your Own Expanding And Contracting Content Panels

Make Your Own Expanding And Contracting Content Panels

Ben Frain



We’ve called them an ‘opening and closing panel’ so far, but they are also described as expansion panels, or more simply, expanding panels.

To clarify exactly what we’re talking about, head on over to this example on CodePen:

Easy show/hide drawer (Multiples) by Ben Frain on CodePen.


That is what we’ll be building in this short tutorial.

From a functionality point of view, there are a few ways to achieve the animated open and close that we are looking for, each approach with its own benefits and trade-offs. I’m going to share the details of my ‘go-to’ method in this article. Let’s consider the possible approaches first.

Approaches

There are variations on these techniques, but broadly speaking, the approaches fall into one of three categories:

  1. Animate/transition the height or max-height of content.
  2. Use transform: translateY to move elements into a new position, giving the illusion of a panel closing, and then re-render the DOM with the elements in their finishing position once the transform is complete.
  3. Use a library that does some combination/variation of 1 or 2!

Considerations Of Each Approach

From a performance perspective, using a transform is more effective than animating or transitioning the height/max-height. With a transform, the moving elements are rasterized and get shifted around by the GPU. This is a cheap and easy operation for a GPU so performance tends to be much better.

The basic steps when using a transform approach are:

  1. Get the height of the content to be collapsed.
  2. Move the content, and everything after it, by the height of the content to be collapsed, using transform: translateY(Xpx). Apply the transform with the transition of your choice to give a pleasing visual effect.
  3. Use JavaScript to listen to the transitionend event. When it fires, display: none the content and remove the transform and everything should be in the right place.

Doesn’t sound too bad, right?
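Here is a minimal sketch of those three steps in code (the .drawer and .stuff-after selectors are my own hypothetical hooks; a real implementation would need to move every element that follows the drawer):

const drawer = document.querySelector(".drawer");
const stuffAfter = document.querySelector(".stuff-after");

function closeDrawer() {
  // 1. Get the height of the content to be collapsed
  const height = drawer.getBoundingClientRect().height;
  // 2. Move the content and everything after it up by that height
  [drawer, stuffAfter].forEach(el => {
    el.style.transition = "transform 0.3s ease";
    el.style.transform = `translateY(-${height}px)`;
  });
  // 3. When the transition ends, hide the content and remove the transforms
  stuffAfter.addEventListener("transitionend", () => {
    drawer.style.display = "none";
    [drawer, stuffAfter].forEach(el => {
      el.style.transition = "";
      el.style.transform = "";
    });
  }, { once: true });
}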

However, there are a number of considerations with this technique so I tend to avoid it for casual implementations unless performance is absolutely crucial.

For example, with the transform: translateY approach, you need to consider the z-index of the elements. By default, the elements that transform up come after the trigger element in the DOM, and therefore appear on top of the things before them when translated up.

You also need to consider how many things appear after the content you want to collapse in the DOM. If you don’t want a big hole in your layout, you might find it easier to use JavaScript to wrap everything you want to move in a container element and just move that. Manageable, but we have just introduced more complexity! This is, however, the kind of approach I went for when moving players up and down in In/Out. You can see how that was done here.

For more casual needs, I tend to go with transitioning the max-height of the content. This approach doesn’t perform as well as a transform, because the browser tweens the height of the collapsing element throughout the transition; that causes a lot of layout calculations, which are not as cheap for the host computer.

However, this approach wins from a simplicity point of view. The pay-off for suffering the aforementioned computational hit is that the DOM re-flow takes care of the position and geometry of everything. We have very little in the way of calculations to write, and the JavaScript needed to pull it off well is comparatively simple.

The Elephant In The Room: Details And Summary Elements

Those with an intimate knowledge of HTML’s elements will know there is a native HTML solution to this problem in the form of the details and summary elements. Here’s some example markup:

<details>
    <summary>Click to open/close</summary>
    Here is the content that is revealed when clicking the summary...
</details>

By default, browsers provide a little disclosure triangle next to the summary element; click the summary, and the contents below the summary are revealed.

Great, hey? The details element even supports the toggle event in JavaScript, so you can do this kind of thing to run different code based upon whether it is open or closed (don’t worry if that kind of JavaScript expression seems odd; we’ll get to that in more detail shortly):

details.addEventListener("toggle", () => {
    details.open ? thisCoolThing() : thisOtherThing();
})

OK, I’m going to halt your excitement right there. The details and summary elements don’t animate. Not by default and it is not currently possible to get them animating/transitioning open and closed with additional CSS and JavaScript.

If you know otherwise, I’d love to be proved wrong.

Sadly, as we need an opening and closing aesthetic we’ll have to roll up our sleeves and do the best and most accessible job we can with the other tools at our disposal.

Right, with the depressing news out of the way, let’s get on with making this thing happen.

Markup Pattern

The basic markup is going to look like this:

<div class="container">
    <button type="button" class="trigger">Show/Hide content</button>
    <div class="content">
        All the content here
    </div>
</div>

We have an outer container to wrap the expander, and the first element is the button which serves as a trigger to the action. Notice the type attribute in the button? I always include that because, by default, a button inside a form performs a submit. If you find yourself wasting a couple of hours wondering why your form isn’t working and buttons are involved, make sure you check the type attribute!

The next element after the button is the content drawer itself; everything you want to be hiding and showing.

To bring things to life, we will make use of CSS custom properties, CSS transitions, and a little JavaScript.

Basic Logic

The basic logic is this:

  1. Let the page load, measure the height of the content.
  2. Set the height of the content onto the container as the value of a CSS Custom Property.
  3. Immediately hide the content by adding an aria-hidden="true" attribute to it. Using aria-hidden ensures assistive technology knows that content is hidden too.
  4. Wire up the CSS so that the max-height of the content class is the value of the custom property.
  5. Pressing our trigger button toggles the aria-hidden property from true to false which in turn toggles the max-height of the content between 0 and the height set in the custom property. A transition on that property provides the visual flair — adjust to taste!

Note: Now, this would be a simple case of toggling a class or attribute if max-height: auto equalled the height of the content. Sadly it doesn’t. Go and shout about that to the W3C here.

Let’s have a look how that approach manifests in code. Numbered comments show the equivalent logic steps from above in code.

Here is the JavaScript:

// Get the containing element
const container = document.querySelector(".container");
// Get content
const content = document.querySelector(".content");
// 1. Get height of content you want to show/hide
const heightOfContent = content.getBoundingClientRect().height;
// Get the trigger element
const btn = document.querySelector(".trigger");

// 2. Set a CSS custom property with the height of content
container.style.setProperty("--containerHeight", `${heightOfContent}px`);

// Once height is read and set
setTimeout(e => {
    document.documentElement.classList.add("height-is-set");
    // 3. Immediately hide the content
    content.setAttribute("aria-hidden", "true");
}, 0);

btn.addEventListener("click", function(e) {
    container.setAttribute("data-drawer-showing", container.getAttribute("data-drawer-showing") === "true" ? "false" : "true");
    // 5. Toggle aria-hidden
    content.setAttribute("aria-hidden", content.getAttribute("aria-hidden") === "true" ? "false" : "true");
})

The CSS:

.content {
  transition: max-height 0.2s;
  overflow: hidden;
}
.content[aria-hidden="true"] {
  max-height: 0;
}
/* 4. Set height to value of custom property */
.content[aria-hidden="false"] {
  max-height: var(--containerHeight, 1000px);
}

Points Of Note

What about multiple drawers?

When you have a number of open-and-hide drawers on a page, you’ll need to loop through them all, as they will likely be differing sizes.

To handle that, we will need to use querySelectorAll to get all the containers, and then re-run the setting of the custom property for the content inside each one with a forEach.

That setTimeout

I have a setTimeout with 0 duration before setting the content to be hidden. This is arguably unneeded, but I use it as a ‘belt and braces’ approach to ensure the page has rendered first, so the heights of the content are available to be read.

Only fire this when the page is ready

If you have other stuff going on, you might choose to wrap your drawer code up in a function that gets initialised on page load. For example, suppose the drawer functionality was wrapped up in a function called initDrawers; we could do this:

window.addEventListener("load", initDrawers);

In fact, we will add that in shortly.

Additional data-* attributes on the container

There is a data attribute on the outer container that also gets toggled. This is added in case there is anything that needs to change with the trigger or container as the drawer opens/closes. For example, perhaps we want to change the color of something or reveal or toggle an icon.

Default value on the custom property

There’s a default value set on the custom property in CSS of 1000px. That’s the bit after the comma inside the value: var(--containerHeight, 1000px). This means if the --containerHeight gets screwed up in some way, you should still have a decent transition. You can obviously set that to whatever is suitable to your use case.

Why Not Just Use A Default Value Of 10000000px?

Given that max-height: auto doesn’t transition, you may be wondering why you don’t just opt for a set height of a value greater than you would ever need. For example, 10000000px?

The problem with that approach is that it will always transition from that height. If your transition duration is set to 1 second, the transition will ‘travel’ 10000000px in a second. If your content is only 50px high, you’ll get quite a quick opening/closing effect!

Ternary operator for toggles

We’ve made use of a ternary operator a couple of times to toggle attributes. Some folks hate them but I, and others, love them. They might seem a bit weird and a little ‘code golf’ at first but once you get used to the syntax, I think they are a more straightforward read than a standard if/else.

For the uninitiated, a ternary operator is a condensed form of if/else. They are written so that the thing to check comes first, then the ? separates what to execute if the check is true, and then the : distinguishes what should run if the check is false.

isThisTrue ? doYesCode() : doNoCode();

Our attribute toggles work by checking if an attribute is set to "true" and, if so, setting it to "false"; otherwise, setting it to "true".
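For comparison, here is the same aria-hidden toggle from the code above written as a standard if/else:

// The longhand equivalent of the ternary toggle
if (content.getAttribute("aria-hidden") === "true") {
    content.setAttribute("aria-hidden", "false");
} else {
    content.setAttribute("aria-hidden", "true");
}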

What happens on page resize?

If a user resizes the browser window, there’s a high probability the heights of our content will change. Therefore, you might want to re-run the setting of heights for the containers in that scenario. Now that we are considering such eventualities, it seems like a good time to refactor things a little.

We can make one function to set the heights and another function to deal with the interactions. Then add two listeners on the window; one for when the document loads, as mentioned above, and then another to listen for the resize event.

All Together

With the page load, multiple drawers, and handling resize events, our JavaScript code looks like this:

var containers;
function initDrawers() {
    // Get the containing elements
    containers = document.querySelectorAll(".container");
    setHeights();
    wireUpTriggers();
    window.addEventListener("resize", setHeights);
}

window.addEventListener("load", initDrawers);

function setHeights() {
    containers.forEach(container => {
        // Get content
        let content = container.querySelector(".content");
        content.removeAttribute("aria-hidden");
        // Height of content to show/hide
        let heightOfContent = content.getBoundingClientRect().height;
        // Set a CSS custom property with the height of content
        container.style.setProperty("--containerHeight", `${heightOfContent}px`);
        // Once height is read and set
        setTimeout(e => {
            container.classList.add("height-is-set");
            content.setAttribute("aria-hidden", "true");
        }, 0);
    });
}

function wireUpTriggers() {
    containers.forEach(container => {
        // Get each trigger element
        let btn = container.querySelector(".trigger");
        // Get content
        let content = container.querySelector(".content");
        btn.addEventListener("click", function(e) {
            container.setAttribute("data-drawer-showing", container.getAttribute("data-drawer-showing") === "true" ? "false" : "true");
            content.setAttribute("aria-hidden", content.getAttribute("aria-hidden") === "true" ? "false" : "true");
        });
    });
}

You can also play with it on CodePen over here:

Easy show/hide drawer (Multiples) by Ben Frain on CodePen.


Summary

It’s possible to go on for some time further refining and catering for more and more situations, but the basic mechanics of creating a reliable opening and closing drawer for your content should now be within your reach. Hopefully, you are also aware of some of the hazards: the details element can’t be animated, max-height: auto doesn’t do what you hoped, and you can’t reliably add a massive max-height value and expect all content panels to open as expected.

To re-iterate our approach here: measure the container, store its height as a CSS custom property, hide the content, and then use a simple toggle to switch between a max-height of 0 and the height stored in the custom property.

It might not be the absolute best performing method, but I have found that for most situations it is perfectly adequate, and it benefits from being comparatively straightforward to implement.

Smashing Editorial
(dm, yk, il)

Source: Smashing Magazine, Make Your Own Expanding And Contracting Content Panels

Smashing Podcast Episode 3 With Jina Anne: What Are Design Tokens?

dreamt up by webguru in Uncategorized | Comments Off on Smashing Podcast Episode 3 With Jina Anne: What Are Design Tokens?

Smashing Podcast Episode 3 With Jina Anne: What Are Design Tokens?

Smashing Podcast Episode 3 With Jina Anne: What Are Design Tokens?

Drew McLellan



In this episode of the Smashing Podcast, we’re talking about Design Tokens. What are they, what problem do they solve, and how can they be used within an existing Design System? Drew McLellan talks to someone who is much more than a token expert: Jina Anne.

Show Notes

Transcript

Drew: She’s a design systems advocate and coach. While at Amazon, she was senior design systems lead, and she was lead designer on the Lightning Design System at Salesforce. While at Apple, she led the CSS architecture and style guide for the Apple Online Store. She’s worked with GitHub, Engine Yard, the Memphis Brooks Museum of Art, and more. She founded and organizes Clarity, the first design systems conference, and is on the Sass core team, where she leads the brand design and website for Sass. When it comes to design systems, you’d be hard pushed to find anyone more qualified, but did you know that she’s never seen a sidewalk? My smashing friends, please welcome Jina Anne. Hello, Jina.

Jina Anne: Hello.

Drew: How are you?

Jina Anne: I’m smashing.

Drew: I wanted to talk to you today about design tokens, which I think is a phrase many of us have probably heard passed about, but we perhaps aren’t sure what it means. But before we get to that, I guess we should talk a little bit about design systems. I mean, design systems are your thing, right?

Jina Anne: Yeah. It rules everything around me. Yeah.

Drew: I think that there’s something that we’re seeing is becoming increasingly common in projects and people are making them public and seems to be a real movement around design systems. But I think there are plenty of organizations that don’t have them in place still. What problem does a formalized design system solve from your point of view?

Jina Anne: It can solve many problems. I think some of the more common problems that people seek to solve is around maintainability and consistency. That usually has to do with design debt or in some cases code debt, some cases both. I also look at it as a… Like, it’s not just about the code or the design, but also the problems around how people work together. So, I look at it as a way to also solve some of the issues around communication and workflow process and so on.

Drew: Are design systems then something exclusively that are useful to really big teams and big organizations?

Jina Anne: I don’t think so. I’ve seen them work really well with smaller teams or sometimes even with a lone designer. They definitely help with larger teams for sure, but they are definitely not exclusive to large teams. In fact, I think if you see yourself perhaps growing at some point to be a large team, then having the system in place already will help you do that more efficiently.

Drew: What did you think are the sort of symptoms that somebody might be looking for if they’re working and they’re still having problems? What do those problems look like that might be solved by putting a design system in place?

Jina Anne: There’s a few, duplication of efforts, duplication of code. You might have a breakdown in communication where things just aren’t being built the way they’re expected to be built. It could come down to things that aren’t documented well, so people don’t really quite know what the best thing is to use or where to look. Yeah, there are all sorts of signs.

Drew: I guess design systems are generally a concept, rather than a specific technical solution. In your work, you must see people using all sorts of different tools to achieve design systems.

Jina Anne: Yeah.

Drew: What are some of the more common ways that people actually go about it?

Jina Anne: I think the most common ways are having a component library done in code and often cases you’ll see it in it like a React library or an Angular library, whatever, platform you’re using. There’s usually also a website associated with it that will display those components. Then you’ll usually see perhaps like a Sketch or a Figma library as well.

Jina Anne: But one of the things that I like to stress to people is that if you look at that website that displays your documentation and your components, that website is not actually your design system. It’s a representation of your design system. So, I think a lot of people spend a lot of time on making this gorgeous, beautiful website and it’s fine. They’re nice to look at and they’re nice to share and they help a lot with communicating what you’re doing and even with recruiting.

Jina Anne: But it’s the system itself that it represents that I want people to spend their love and care into, so thinking through what’s going into that website, like the content and how you’ve organized things, how you’ve named things, the things that you’re systemizing, so, yeah. I think a lot of people think about the artifacts, like the deliverables, but really it’s a lot more than that. It’s a lot of process and workflow as well.

Drew: Is it exclusively web projects that the design system would help with?

Jina Anne: Not at all. It is the most common, I believe, from, at least, what I’ve seen, but design systems definitely can cover many things. In the digital space, you have native platforms, but even outside the digital space, I think a lot of people talk about design systems in a digital product space. But they’ve been around for ages for traditional medias and real-world scenarios. If you have seen the NASA graphic standards manual from like the ‘70s, that was a design system. It just was across all the different like rockets and spacesuits and all that, instead of digital products.

Drew: So, I guess, there must be some overlap between things, traditional things like brand guidelines and that sort of documentation that I think probably people are familiar with in all sorts of walks of life. There must be a crossover between that sort of documentation of a system and a more modern concept of a design system.

Jina Anne: Yeah, I believe so. I think a lot of people forget that it’s all about branding. The whole reason any of this even started and why we want to display these things in a uniform or unified way is all about the brand because brand isn’t just logos. It’s how people use and experience your company’s service or product or whatever it is that you offer. So, yeah, absolutely.

Drew: So, I’ve got a design system in place, I mean an organization. We’ve done a whole lot of work. We’ve got a design system. There are creatives within the organization working in maybe, like you mentioned, Figma or Sketch. We’ve got web designers using that in a CSS. Perhaps we’ve got a mobile team doing like Android and iOS development, building apps. Loads of people working with a design system contributing into it and consuming stuff from it. Where do design tokens come in? What problem do they solve?

Jina Anne: Ooh, yes. Let me first take it back to a story. When I first joined at Salesforce, I was actually part of a small project team. It was a different product, it’s like a productivity tool like tasks and notes and things like that. We were only three designers and I was the only one that, I guess, I wouldn’t say brave enough, but maybe interested enough to work with the Android designs. The other two designers, I think, just weren’t quite as interested. So, I was basically the main designer on our Android app. Then I also did a lot of design for iOS app and, of course, the web application as well and the marketing website, so lots of different projects in play.

Jina Anne: With the website, since I like to design and code, it was pretty straightforward. I could go ahead and build the buttons and typography and everything that we needed for the web application or the marketing website, document it in code and deliver that.

Jina Anne: However, with both the Android and iOS app, I don’t really know how to code for that and so I wasn’t able to deliver the same thing. So, I was having to do a ton of redlines specs, which, if you’re not familiar with redlines, it’s essentially where you are specking out every single spacing, font size, color, anything to indicate how to build it for the engineer. I would do these for many, many, many screens and, of course, a lot of those screens had variations because maybe you’re showing what happens when you clicked that button or when a certain state happens. So, doing this across many, many screens and then saving those up to Dropbox and then documenting it in a Wiki. That was the process that I was having to do at the time.

Jina Anne: I usually think about things in a CSS way, like especially the C in CSS, so I usually think, “Oh, well, font sizes should only need to be declared one time because it’s going to cascade everywhere.” But I found that with certain engineers that I’ve worked within the past, if you don’t spec it, and I guess with native it works a little differently, they’re not going to build it and so I would have to be very explicit and name pretty much everything per screen. I was just like, “Oh, why is it like this?” Then any time we made any changes, I had to go back through and change all those screens again. It was not fun at all.

Jina Anne: Fast forward to when I moved over to the core team of Salesforce, I had been working in the Sass website and I’ve been playing around with using a YAML file to store the data for colors, typography, spacing and so on and was looping over that data to create the style guide, as well as the Sass variables in the classes. The reason I did that was we open-sourced the Sass website and I wanted people to be able to contribute to the design as well. But I didn’t want to make it a tedious process where you had to update the style guide along with any colors that you’re adding and so doing it this way, just kind of automated that process.

Jina Anne: I showed that to the team at Salesforce and then that kind of is where the concept of design tokens spawned off of. So they built a tool called Theo and there’s other tools out now that do the same thing like Style Dictionary. But the idea of it is you have this automated tool that takes the data that you give it and generates the code. You might think, “Well, that might be over-engineering variables. Why not just use variables?”

Jina Anne: Well, the idea is, as you alluded to earlier, like native platforms just take those attributes in a totally different way and so trying to scale design to Android and iOS, whatever other platforms we had at Salesforce. We had some people on Java, we had some people on React, some people on Angular, PHP, not just internally at Salesforce, but also externally with all our partners and customers that were building their own applications. So, this was a way to store our visual information as data and then, in an automated way, generate the variables or the XML data you needed or the JSON data, whatever format that particular platform looked for.

Jina Anne: Then what was great about it was we found, let’s say a color doesn’t pass contrast ratios. I didn’t have to then notify the Android team and the iOS team and the web team. I just made that change and then they would get that change automatically the next time that they would pull in the latest. So, it just really helped streamline a lot of that and helped us be able to take off some of the burdens of updating visual designs from the engineers and that let us do that.

Drew: So, instead of being sort of variables within one particular code base, within your own React codebase or within your PHP or within your Java or wherever, they’re like variables across an entire organization? Is that fair to say?

Jina Anne: Correct. Correct. Then what’s cool is things like colors, for example, like transparent colors, you do that differently in Android, like eight-digit hex, instead of RGBA like you would with web. So that tool that you use, if you’re using one that is built to think through all this, does that transformation for you. So, rather than saying RGBA 50 comma, 40 comma, whatever the color, you can just say color background card or something like that. It’s really more of a named entity now and then you can all be speaking the same language, even though it might render a different syntax.

Drew: Right. So, although variables kind of the nuts and bolts of how it might be implemented, the idea is kind of much bigger than just what you’d think of as just variables. I mean, I guess in a way like RSS could be called just variables. But, actually, the way it enables us to distribute blog content and podcasts and everything has a much wider impact than just the core technology that’s there.

Jina Anne: Yeah, I think that’s actually a really good metaphor. I do see a lot of people when they use it or talk about it in their own design system website, they’re usually only talking about like Sass variables or CSS variables. I think that’s why there’s this confusion, like, “Well, isn’t that just variables?” It’s, like, “Why are we renaming it?” But it is that much broader application of it with a whole process around it. It even gets into like how you distribute those variables across components, like on a global level or on an individual component level. You can have multi-layers and so on. It can get pretty interesting.

Drew: So, I suppose as well as helping in the maintenance, you mentioned being able to change a color in one central location and then have everything that is, using those design tokens, be able to pick it up when the next build or next refresh from the system, presumably this has the potential to enable all sorts of other interesting things. I know a lot of people make sort of white-labeled products. It’s the same core product, but it’s customized with different design tweaks for different and things. So, using design tokens could actually be a solution for those sorts of applications as well, the need to span more than just one particular codebase.

Jina Anne: Right. Yeah. So, that was definitely a use case at Salesforce. We have a lot of, I don’t know why I’m still using present tense, but we had a lot of customers that wanted to be able to brand their UI that they were using. So, we had this concept of certain variables that we wanted to actually be seen more as like a constant, like maybe it’s an error color versus colors that were meant to be configured, like brandable colors. So, for some people’s needs that can get interesting, too, white labeling or offering any sort of theming, dark mode or night mode, even offering a feature, which you may have seen in Gmail, but it’s like that comfortable, cozy, compact spacing density. So, there are all sorts of extra stuff that you can get with it across multiple products very quickly, which is really nice.

Drew: It is really an extension of core principles of programming where you make sure that you’ve really defined things once in one place, so you don’t have multiple instances so it’s easy to update. But it is looking at that as a much, much bigger idea than just one small element of a product, looking at it across everything and centralizing that.

Jina Anne: Yeah, so we definitely looked at these as our source of truth. However, in case anybody is worried about like, “Well, Android does things differently than iOS,” or you might have some concerns there. Depending on how you’ve architected things, you can still solve for those use cases. So, we would have a global token set that all our products would basically import in, but then we made them in a way where you could either alter it for that particular context or extend it, like offer maybe additional tokens that only that particular context needs. So, you can still give the fine-tune experience that you need to give to each of those context, while bringing in the most common shared things.

Drew: On a technical level, how would this actually work? Is there like a common file format the different systems share? Is there like an established standard for how you declare your design tokens?

Jina Anne: It’s interesting that you asked that. There’s actually a community group formed through… W3C has all these community groups. It’s not really exactly a working group, but it’s still like an initiative across various people that are in this space trying to come up with a recommendation of what those standards could be. Even how people store their data can change. Like it could be YAML, it could be JSON, it could even be a spreadsheet. Then what you export would be different because you might be using Sass, you might be using LESS, you might be using some sort of XML base system. We actually don’t want to tell you which of those things to use because depending on our use case, you might need to use spreadsheets instead of JSON or YAML or you might need to use XML instead of Sass or LESS or even CSS variables. That’s because everybody’s products are so different and have different needs.

Jina Anne: But what we can standardize on is around the tooling to generate these things. The reason we want to try to come to some sort of standard is because so many design tools are starting to implement this, InVision, Adobe, Figma. All these tools are looking at design tokens because there is a need to not just make this a code-based thing, but make this a design tool-driven thing as well. We don’t want to do it in a way where those tools don’t feel like they can innovate. We want them to be able to innovate, but at least offer some sort of standards so that new tool-makers can get into this space and already have sort of an established understanding of how to set that up. So, while we’re not going to get strict on your format of what file format you’re using or what tool you’re using, we’re going to more try to standardize on like the internal process and basically the API of it.

Drew: Because like I said, once that API has been defined, the tooling can spring up around it that speaks with that API for whatever tools that people want to use. So, somebody could write up a Java library that speaks that API, and then anything that’s using Java could make use of it and so on. Are there any tools currently that support design tokens in any way?

Jina Anne: Yeah. On the code side, I mentioned already Theo and Style Dictionary. There’s also one called Diez, D-I-E-Z. That’s kind of newer to the space and it’s taking it beyond, just like doing the transformation process, but kind of treating design tokens as a component in a way and so that’s cool.

Jina Anne: Then on the design side, InVision already has it in their DSM tool, which is their Design System Manager tool. The last I looked at it, it was just colors and typography, but I do know when I… I talked to Evan, who is one of the main folks behind that product. He did tell me other things like spacing should be coming into play, if it’s not already. I haven’t looked at it super recently. I know there are newer tools that are really catching my eye, like Modulz and Interplay. Both of those are code-driven design tools.

Jina Anne: Then I’ve been told that it’s supposed to come into some of the stuff that Figma and Adobe are doing, so I’m not sure if I’m revealing secrets. I don’t think I am. I think it’s all stuff they’ve talked about publicly. But, yeah, I’m really excited because I think while it was something that we were doing really just making our design system work easier, it’s kind of almost accidentally created a path for bringing design tools and code cluster together. That’s really exciting to me.

Drew: The makers of these various tools, are they working with the design tokens community group?

Jina Anne: Yeah, a lot of them have joined. Since I’m a chair member, I get to see by email, everybody who joins. It sends me a notice. What’s cool is not only just seeing all these design tool people joining, but also seeing big companies. I saw like Google and Salesforce and all that, so it’s really exciting. Because I think it shows that this really matters to where a lot of people are doing on a large scale and small scale and that’s pretty cool.

Drew: So, if I was sort of listening to this and thinking about my own projects, thinking, “Ah, yes, design tokens are absolutely the answer to all these problems that I’m having,” where would I go to find out more to start learning and start maybe using design tokens?

Jina Anne: It’s a really good question. There are a few articles and I can send you some links to include with this, but I think one of the first articles, which I wish I had written, but Nathan Curtis wrote and that he actually kind of helped bring attention to them. I think he inspired a lot of people to start using them, so he kind of discusses what they are and how to use them, his recommended way.

Jina Anne: I don’t like the title of this next article I’m going to mention, but it’s called Design Tokens for Dummies. I’m not a fan of using that terminology, but it is a pretty well thought-through article that goes through pretty much everything about them. There was a CSS Tricks article by Robin Rendle recently that just explains really what they are. I did an All You Can Learn Library session for Jared Spool a while back, but it is a membership-based thing so you would have to have access to that to see it. I know there’s been a lot of presentations and stuff, but there’s not like an official book to it yet. But that’s perhaps something I’m working on. It’s like one of two books I’m working on, actually.

Drew: So, if I’m a toolmaker or I work for maybe a big organization that’s having these sorts of problems and they’ve got some ideas about maybe contributing to the process of designing how the standard works, is the design tokens community group something that I could get involved in?

Jina Anne: Absolutely. I think you’ll want a GitHub because that’s where all of the public discussions and notes and things are happening. Then on the W3C community group website, you can create an account there. Having that account enables you to join other community groups as well. But then, yeah, at that point once you’ve created your account there and… I think it asks if you have any affiliations, like if you work for a big company or anything like that, just so it’s transparent, like if you have any, I wouldn’t say necessarily bias, but like a certain interest. It just helps everybody understand where you’re coming from. Anyway, at that point, yeah, you join and you’re pretty much in.

Drew: It’s quite an open process then.

Jina Anne: Yeah.

Drew: What’s in the future for design tokens? What’s coming down the line?

Jina Anne: I’m really excited about what’s going on with the community group. Kaelig’s been doing most of the leading of it. He’s the co-chair with me and I really love seeing his passion behind this. My particular interests in this are really around the education of it. So, kind of similarly to the work I’ve been doing with the Sass community, I kind of want to do a little bit of that for the design token community, like talk through how to educate people on what this is and not just make it an API doc, but also like where to get started, how to get into this. That’s something I’m interested in project-wise.

Jina Anne: I’m also really keen to see where this evolves, especially with all these design tool companies getting involved. Then a lot of people mostly think about design tokens as a visual abstraction, but really what it came from was the same technology that you used for localizing content. You wrap things in strings and then you can pass through different stuff, so bringing it back to its roots. I’d love to see the application of this apply in different ways, like interactions and content. I’m not really super keen on AR/VR-type stuff, but how does it maybe manifest there? Yeah, really just seeing it kind of go beyond just like the visual layer of what we see.

Drew: I guess that’s the beauty of having an open process like the W3C community group, is that people who do have specialisms in things like AR and VR can contribute to the conversation and bring their expertise to it as well.

Jina Anne: Absolutely.

Drew: I’ve been learning a lot about design tokens today. What have you been learning about lately?

Jina Anne: I’m always trying to learn something, but I’ve actually been occasionally taking some cocktail classes. Yeah. I’m not really with the interest of becoming a bartender, but more of just having an appreciation for cocktails. What’s cool about these classes is they’re beyond just making cocktails. They actually talk about business practices and ethical practices, the hygiene of your bar, all sorts of stuff like that, so it’s been really fascinating because I think I have like this weird fantasy of one-day leaving tech and maybe going into that. Let’s see.

Drew: Do you have a favorite cocktail?

Jina Anne: Manhattan.

Drew: It’s good. It’s good.

Jina Anne: Yeah.

Drew: You can’t go wrong with a Manhattan.

Jina Anne: I have been ordering a lot of Old Fashioneds lately so that would probably be number two.

Drew: Do you have a favorite bourbon?

Jina Anne: Ooh. The first one that came to mind is Angel’s Envy. It’s like finished in port barrels that have kind of this slightly port-like essence to it. Their rye is really good, too. It’s like finished in rum barrels, so it almost has like a banana bread-like flavor to it.

Drew: This is a direction I wasn’t expecting to go in today.

Jina Anne: Yeah.

Drew: Was there anything else you’d like to talk about design tokens?

Jina Anne: My take is, just like with design systems, people are going to use them in different ways and also there might be people out there that don’t even need to use this. If you just have like an editorial website that is pretty straightforward, maybe all you really need are CSS variables and that’s it. There’s no need to over-engineer things.

Jina Anne: This is really more for people that really need to scale or if you have a theming context then maybe. But, yeah, it’s really not meant for everyone. So, just because it’s becoming kind of a hot thing to talk about, you might not need to even bother with it.

Drew: If you, dear listener, would like to hear more from Jina, you can follow her on Twitter where she’s @Jina, or find her and all her projects on the web at sushiandrobots.com. Thanks for joining us today, Jina. Do you have any parting words?

Jina Anne: Design systems are for people.

Smashing Editorial
(dm, ra, il)

Source: Smashing Magazine, Smashing Podcast Episode 3 With Jina Anne: What Are Design Tokens?

Collective #566

dreamt up by webguru in Uncategorized | Comments Off on Collective #566



Gifolio

Gifolio is a brilliant collection of design portfolios presented using animated GIFs. By Roll Studio.

Check it out




Masks

An interactive presentation on masking techniques originally created for a Creative Front-end Belgium meetup hosted by Reed. By Thomas Di Martino.

Check it out





Supermaya

Supermaya is an Eleventy starter kit designed to help you add rich features to a blog or website without the need for a complicated build process.

Check it out



Dark Mode

Varun Vachhar shares the challenges he encountered when migrating from Jekyll to Gatsby related to dark mode.

Read it

Fresh Folk

A beautiful mix-and-match illustration library of people and objects made by Leni Kauffman.

Check it out



GitHub Archive Program

The GitHub Archive Program will safely store every public GitHub repo for 1,000 years in the Arctic World Archive in Svalbard, Norway.

Check it out



Collective #566 was written by Pedro Botelho and published on Codrops.


Source: Codrops, Collective #566

Abstracting WordPress Code To Reuse With Other CMSs: Concepts (Part 1)

dreamt up by webguru in Uncategorized | Comments Off on Abstracting WordPress Code To Reuse With Other CMSs: Concepts (Part 1)

Abstracting WordPress Code To Reuse With Other CMSs: Concepts (Part 1)

Abstracting WordPress Code To Reuse With Other CMSs: Concepts (Part 1)

Leonardo Losoviz



Making code that is agnostic of the CMS or framework has several benefits. For instance, through its new content editor Gutenberg, WordPress makes it possible to code components which can be used for other CMSs and frameworks too, such as Drupal and Laravel. However, Gutenberg’s emphasis on the re-utilization of code is focused on the client-side code of the component (JavaScript and CSS); concerning the component’s backend code (such as the provision of APIs that feed data to the component), there is no pre-established consideration.

Since these CMSs and frameworks (WordPress, Drupal, Laravel) all run on PHP, making their PHP code re-usable too will make it easier to run our components on all these different platforms. As another example, if we ever decide to replace our CMS with another one (as has recently happened, with many people decrying WordPress after its introduction of Gutenberg), having the application code be agnostic of the CMS simplifies matters: The more CMS-agnostic our application code is, the less effort will be required to port it to other platforms.

Starting with application code built for a specific CMS, the process of transforming it to be CMS-agnostic is what, in this article, I will call “abstracting code”. The more abstract the code is, the more easily it can be re-used to work with whichever CMS.

Making the application completely CMS-agnostic is very tough though — even possibly impossible — since sooner or later it will need to depend on the specific CMS’s opinionatedness. Then, instead of attempting to achieve 100% code reusability, our goal must simply be to maximize the amount of code that is CMS-agnostic, making it reusable across different CMSs or frameworks (for the context of this article, these two terms will be used interchangeably). Migrating the application to a different framework will then not be without pain, but at least it will be as painless as possible.

The solution to this challenge concerns the architecture of our application: We must keep the core of the application cleanly decoupled from the specifics of the underlying framework, by coding against interfaces instead of implementations. Doing so will grant additional benefits to our codebase: We can then focus our attention almost exclusively on the business logic (which is the real essence and purpose of the application), causing the code to become more understandable and less muddled with the limitations imposed by the particular CMS.

This article is composed of two parts: In this first part, we will conceptualize and design the solution for abstracting the code from a WordPress site, and in the second part, we will implement it. The objective shall be to keep the code ready to be used with Symfony components, the Laravel framework, and October CMS.

Code Against Interfaces, Rely On Composer, Benefit From Dependency Injection

The design of our architecture will be based on the following pillars:

  1. Code against interfaces, not implementations.
  2. Create packages, distribute them through Composer.
  3. Dependency Injection to glue all parts together.

Let’s analyze them one by one.

Code Against Interfaces, Not Implementations

Coding against interfaces is the practice of interacting with a certain piece of code through a contract. A contract, which is set up through an interface from our programming language (PHP in our case since we are dealing with WordPress), establishes the intent of certain functionality, by explicitly stating what functions are available, what inputs are expected for each function, and what each function will return, and it is not concerned with how the functionality must be implemented. Then, our application can be cleanly decoupled from a specific implementation, not needing to know how its internals work, and being able to change to another implementation at any time without having to drastically change code. For instance, our application can store data by interacting with an interface called DataStoreInterface instead of any of its implementations, such as instances of classes DatabaseDataStore or FilesystemDataStore.
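As a minimal sketch of that example (the method names and storage details here are illustrative, not a prescribed API):

interface DataStoreInterface
{
  public function save(string $key, $value): bool;
  public function get(string $key);
}

class FilesystemDataStore implements DataStoreInterface
{
  public function save(string $key, $value): bool
  {
    return file_put_contents('/tmp/datastore/' . md5($key), serialize($value)) !== false;
  }

  public function get(string $key)
  {
    return unserialize(file_get_contents('/tmp/datastore/' . md5($key)));
  }
}

// The application only ever type-hints the contract, so the
// implementation can be swapped without touching this code
function storeResults(DataStoreInterface $dataStore, array $results): bool
{
  return $dataStore->save('results', $results);
}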

In the context of WordPress, this implies that — by the end of the abstraction — no WordPress code will be referenced directly, and WordPress itself will simply be a service provider for all the functions that our application needs. As a consequence, we must consider WordPress as a dependency of the application, and not as the application itself.

Contracts and their implementations can be added to packages distributed through Composer and glued together into the application through dependency injection, which are the items we will analyze next.

Create Packages, Distribute Them Through Composer

Remember this: Composer is your friend! This tool, a package manager for PHP, allows any PHP application to easily retrieve packages (i.e. code) from any repository and install them as dependencies.

Note: I have already described how we can use Composer together with WordPress in a previous article I wrote earlier this year.

Composer is itself CMS-agnostic, so it can be used for building any PHP application. Packages distributed through Composer, though, may be CMS-agnostic or not. Therefore, our application should depend on CMS-agnostic packages (which will work for any CMS) as much as possible, and when not possible, depend on the corresponding package that works for our specific CMS.

This strategy can be used to code against contracts, as explained earlier on. The packages for our application can be divided into two types: CMS-agnostic and CMS-specific ones. The CMS-agnostic package will contain all the contracts and all generic code, and the application will exclusively interact with these packages. For each CMS-agnostic package containing contracts, we must also create a CMS-specific package containing the implementation of the contracts for the required CMS, which is set into the application by means of dependency injection (which we’ll analyze below).
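For instance, the composer.json of the CMS-specific package could declare its dependency on the contracts package like this (all package names here are made up for illustration):

{
  "name": "acme/posts-wordpress",
  "description": "WordPress implementation of the acme/posts contracts",
  "require": {
    "acme/posts": "^1.0"
  },
  "autoload": {
    "psr-4": {
      "Acme\\PostsWordPress\\": "src/"
    }
  }
}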

For example, to implement an API to retrieve posts, we create a CMS-agnostic package called “Posts”, with contract PostAPIInterface containing function getPosts, like this:

interface PostAPIInterface
{
  public function getPosts($args);
}

This function can be resolved for WordPress through a package called “Posts for WordPress”, which resolves the contract through a class WPPostAPI, implementing function getPosts to simply execute WordPress function get_posts, like this:

class WPPostAPI implements PostAPIInterface
{
  public function getPosts($args) {
    return get_posts($args);
  }
}

If we ever need to port our application from WordPress to another CMS, we must only implement the corresponding CMS-specific package for the new CMS (e.g. “Posts for October CMS”) and update the dependency injection configuration matching contracts to implementations, and that’s it!
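By way of illustration, a hypothetical “Posts for October CMS” package could resolve the same contract through October’s blog plugin; this sketch assumes the RainLab.Blog plugin provides the Post model, and it maps just one of the possible arguments:

use RainLab\Blog\Models\Post;

class OctoberPostAPI implements PostAPIInterface
{
  public function getPosts($args)
  {
    // Translate the generic arguments into an Eloquent query
    $query = Post::query();
    if (isset($args['posts_per_page'])) {
      $query->limit($args['posts_per_page']);
    }
    return $query->get();
  }
}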

Note: It is a good practice to create packages that only define contracts and nothing else. This way, it is easy for implementers to know exactly what must be implemented.

Dependency Injection To Glue All Parts Together

Dependency injection is a technique that allows declaring which object from the CMS-specific package (aka the “service provider”) is implementing which interface from the CMS-agnostic package (aka the “contract”), thus gluing all parts of the application together in a loosely-coupled manner.

Different CMSs or frameworks may already ship with their own implementation of a dependency injection component. For instance, whereas WordPress doesn’t have any, both Symfony and Laravel have their own solutions: the DependencyInjection component and the Service Container, respectively.

Ideally, we should keep our application free from choosing a specific dependency injection solution, and leave it to the CMS to provide for this. However, dependency injection must also be used to bind together generic contracts and services, and not only those depending on the CMS (for instance, a contract DataStoreInterface, resolved through service provider FilesystemDataStore, may be completely unrelated to the underlying CMS). In addition, a very simple application that does not require an underlying CMS will still benefit from dependency injection. Hence, we are compelled to choose a specific solution for dependency injection.

Note: When choosing a tool or library, prioritize those ones which implement the corresponding PHP Standards Recommendation (in our case, we are interested in PSR-11), so they can be replaced without affecting the application code as much as possible (in practice, each solution will most likely have a custom initialization, so some re-writing of application code may be unavoidable).
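For example, PSR-11 defines only a ContainerInterface with the methods get and has, so application code that relies on nothing more than those two methods survives a change of container. A sketch (the 'cache' service id matches the configuration shown further below):

use Psr\Container\ContainerInterface;

function resolveCache(ContainerInterface $container): CacheInterface
{
  // PSR-11 guarantees only these two methods, which is all we rely on
  if (!$container->has('cache')) {
    throw new \RuntimeException('No cache service has been configured');
  }
  return $container->get('cache');
}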

Choosing The Dependency Injection Component

For my application, I have decided to use Symfony’s DependencyInjection component which, among other great features, can be set up through YAML and XML configuration files, and supports autowiring, which automatically resolves how different services are injected into one another, greatly reducing the amount of configuration needed.

For instance, a service Cache implementing a contract CacheInterface, like this one:

namespace MyPackage\MyProject;
class Cache implements CacheInterface
{
  private $cacheItemPool;
  private $hooksAPI;

  public function __construct(
    CacheItemPoolInterface $cacheItemPool, 
    HooksAPIInterface $hooksAPI
  ) {
    $this->cacheItemPool = $cacheItemPool;
    $this->hooksAPI = $hooksAPI;
  }

  // ...
}

… can be set as the default service provider through the following services.yaml configuration file:

services:
  _defaults:
    bind:
      MyPackage\MyProject\HooksAPIInterface: '@hooks_api'

  hooks_api:
    class: MyPackage\MyProject\ContractImplementations\HooksAPI

  cache:
    class: MyPackage\MyProject\Cache
    public: true
    arguments:
      $cacheItemPool: '@cache_item_pool'

  cache_item_pool:
    class: Symfony\Component\Cache\Adapter\FilesystemAdapter

As can be observed, class Cache requires two parameters in its constructor, and these are resolved and provided by the dependency injection component based on the configuration. In this case, while parameter $cacheItemPool is manually set, parameter $hooksAPI is automatically resolved through type-hinting (i.e. matching the expected parameter’s type with the service that resolves that type). Autowiring thus helps reduce the amount of configuration required to glue the services and their implementations together.

Make Your Packages As Granular As Possible

Each package must be as granular as possible, dealing with a specific objective, and containing no more or less code than is needed. This is by itself a good practice in order to avoid creating bloated packages and to establish a modular architecture; however, it is mandatory when we do not know which CMS the application will run on. This is because different CMSs are based on different models, and it is not guaranteed that every objective can be satisfied by the CMS, or under what conditions. Keeping packages small and objective then makes it possible to fulfill the required conditions in a progressive manner, or to discard a package when its corresponding functionality can’t be satisfied by the CMS.

Let’s take an example: If we come from a WordPress mindset, we could initially assume that entities “posts” and “comments” will always be a part of the Content Management System, and we may include them under a package called “CMS core”. However, October CMS doesn’t ship with either posts or comments in its core functionality, and these are implemented through plugins. For the next iteration, we may decide to create a package to provide for these two entities, called “Posts and Comments”, or even “Posts” under the assumption that comments are dependent on posts and bundled with them. However, once again, the plugins in October CMS don’t implement these two together: There is a plugin implementing posts and another plugin implementing comments (which has a dependency on the posts plugin). Finally, our only option is to implement two separate packages: “Posts” and “Comments”, and assign a dependency from the latter to the former one.

Likewise, a post in WordPress contains post meta attributes (i.e. additional attributes to those defined in the database model), and we may assume that every CMS will support the same concept. However, we can’t guarantee that another CMS will provide this functionality and, even if it did, its implementation may be so different from that of WordPress that the same operations could not be applied to the meta attributes.

For example, both WordPress and October CMS have support for post meta attributes. However, whereas WordPress stores each post meta value as a row on a different database table than where the post is stored, October CMS stores all post meta values in a single entry as a serialized JSON object in a column from the post table. As a consequence, WordPress can fetch posts filtering data based on the meta value, but October CMS cannot. Hence, the package “Posts” must not include the functionality for post meta, which must then be implemented on its own package “Post Meta” (satisfiable by both WordPress and October CMS), and this package must not include functionality for querying the meta attributes when fetching posts, which must then be implemented on its own package “Post Meta Query” (satisfiable only by WordPress).
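Following the same pattern as the “Posts” example earlier, the “Post Meta” package would then define its own small contract, resolved for WordPress by delegating to get_post_meta (a sketch; the interface name mirrors the article’s naming convention but is my own):

interface PostMetaAPIInterface
{
  public function getMetaValue($postID, string $key);
}

class WPPostMetaAPI implements PostMetaAPIInterface
{
  public function getMetaValue($postID, string $key)
  {
    // The third parameter `true` makes WordPress return a single value
    return get_post_meta($postID, $key, true);
  }
}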

Identifying Elements That Need To Be Abstracted

We must now identify all the pieces of code and concepts from a WordPress application that need to be abstracted for it to run on any other CMS. Digging into an application of mine, I identified the following items:

  • accessing functions
  • function names
  • function parameters
  • states (and other constant values)
  • CMS helper functions
  • user permissions
  • application options
  • database column names
  • errors
  • hooks
  • routing
  • object properties
  • global state
  • entity models (meta, post types, pages being posts, and taxonomies, i.e. tags and categories)
  • translation
  • media

Long as it is, this list is not yet complete. There are many other items that need abstraction which I will not presently cover, such as dealing with the location of assets (some frameworks may require image/font/JavaScript/CSS files to be placed in a specific directory) and CLI commands (WordPress has WP-CLI, Symfony has the Console component, and Laravel has Artisan; the commands for each of these could be unified).
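To give a flavor of what this abstraction will look like, here is a minimal sketch for the first item on the list, accessing functions (illustrative only; the actual implementation is the subject of the next article). Application code depends on a contract, and the WordPress-specific package supplies the adapter:

    <?php
    // CMS-agnostic contract, defined in its own package.
    interface PostAPIInterface
    {
        public function getPosts(array $args): array;
    }

    // WordPress implementation, defined in a separate package. The
    // WordPress-specific function get_posts() is referenced only here,
    // never from the application code.
    class WordPressPostAPI implements PostAPIInterface
    {
        public function getPosts(array $args): array
        {
            return get_posts($args);
        }
    }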

In the next (and final) part of this series of articles, we will proceed to implement the abstraction for all the items identified above.

Evaluating When It Makes Sense To Abstract The Application

Abstracting an application is not difficult but, as we shall see in the next article, it involves plenty of work, so we must consider carefully whether we really need it. Let’s weigh the advantages and disadvantages of abstracting the application’s code:

Advantages

  • The effort required to port our application to other platforms is greatly reduced.
  • Because the code reflects our business logic and not the opinionatedness of the CMS, it is more understandable.
  • The application is naturally organized through packages, each providing progressive enhancement of functionality.

Disadvantages

  • Extra ongoing work.
  • Code becomes more verbose.
  • Longer execution time from added layers of code.

There is no magic way to determine whether we’ll be better off abstracting our application code. However, as a rule of thumb, I propose the following approach:

For a new project, it makes sense to establish an agnostic architecture: the required extra effort is manageable, and the advantages make it well worth it. For an existing project, though, the one-time effort to abstract it could be very taxing, so we should analyze which is more expensive (in terms of time and energy): the one-time abstraction, or maintaining several codebases.

Conclusion

Setting up a CMS-agnostic architecture for our application allows us to port it to a different platform with minimal effort. The key ingredients of this architecture are: coding against interfaces, distributing these through granular packages, implementing them for a specific CMS in a separate package, and tying all parts together through dependency injection.

Other than a few exceptions (such as choosing Symfony’s solution for dependency injection), this architecture attempts to impose no opinionatedness of its own. The application code can then directly mirror the business logic, and not the limitations imposed by the CMS.

In the next part of this series, we will implement the code abstraction for a WordPress application.

Source: Smashing Magazine, Abstracting WordPress Code To Reuse With Other CMSs: Concepts (Part 1)

Collective #565

dreamt up by webguru in Uncategorized | Comments Off on Collective #565

The 2019 Web Almanac

The Web Almanac is an annual state of the web report combining the expertise of the web community with the data and trends of the HTTP Archive.

Check it out

Gauges

Amelia Wattenberger coded up a gauge example from Fullstack D3’s Dashboard Design chapter as a React component.

Check it out

My Inner Wolf

An eclectic visual composition of our inner worlds: a project on absence epilepsy seizures by Moniker in collaboration with Maartje Nevejan.

Check it out

Collective #565 was written by Pedro Botelho and published on Codrops.


Source: Codrops, Collective #565