
Collective #518





Take Back Your Web

“Own your domain. Own your content. Own your social connections. Own your reading experience.” —Tantek Çelik talks about how we can use IndieWeb services, tools, and standards to take back our web.

Watch it




SCAR

SCAR is a deployment stack that makes it easy for you to deploy a static website with a custom domain, SSL, and a CDN.

Check it out




Animation Handbook

A free ebook that will teach you how to use animation to demonstrate abstract concepts, make products feel more life-like, and instill more emotion into digital experiences. By Ryan McLeod.

Get it




Font Kiko

Font Kiko is a pack of more than 700 Open Source icons suitable for many types of projects.

Get it





Undesign

A collection of free design tools and resources for makers, developers and designers.

Check it out

Collective #518 was written by Pedro Botelho and published on Codrops.


Source: Codrops, Collective #518

Creating Your Own React Validation Library: The Features (Part 2)


Kristofer Selbekk



Implementing a validation library isn’t all that hard. Neither is adding all of those extra features that make your validation library much better than the rest.

This article continues implementing the validation library we started building in the previous part of this series. These are the features that are going to take us from a simple proof of concept to an actual usable library!

  • Part 1: The Basics
  • Part 2: The Features
  • Part 3: The Experience (Coming up next week)

Only Show Validation On Submit

Since we’re validating on all change events, we’re showing the user error messages way too early for a good user experience. There are a few ways we can mitigate this.

The first solution is simply providing the submitted flag as a returned property of the useValidation hook. This way, we can check whether or not the form is submitted before showing an error message. The downside here is that our “show error code” gets a bit longer:

<label>
  Username
  <br />
  <input {...getFieldProps('username')} />
  {submitted && errors.username && (
    <div className="error">{errors.username}</div>
  )}
</label>

Another approach is to provide a second set of errors (let’s call them submittedErrors), which is an empty object if submitted is false, and the errors object if it’s true. We can implement it like this:

const useValidation = config => {
  // as before
  return {
    errors: state.errors,
    submittedErrors: state.submitted ? state.errors : {},
  };
}

This way, we can simply destructure out the type of errors that we want to show. We could, of course, do this at the call site as well — but by providing it here, we’re implementing it once instead of inside all consumers.
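
At the call site, that could look something like this (a sketch building on the username field from before):

const { getFieldProps, submittedErrors } = useValidation(config);

// ...inside the form markup
<input {...getFieldProps('username')} />
{submittedErrors.username && (
  <div className="error">{submittedErrors.username}</div>
)}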

Show Error Messages On-Blur

A lot of people want to be shown an error once they leave a certain field. We can add support for this by tracking which fields have been “blurred” (navigated away from), and returning an object blurredErrors, similar to the submittedErrors above.

The implementation requires us to handle a new action type — blur, which will update a new state object called blurred:

const initialState = {
  values: {},
  errors: {},
  blurred: {},
  submitted: false,
};

function validationReducer(state, action) {
  switch (action.type) {
    // as before
    case 'blur':
      const blurred = { 
        ...state.blurred, 
        [action.payload]: true 
      }; 
      return { ...state, blurred };
    default:
      throw new Error('Unknown action type');
  }
}

When we dispatch the blur action, we create a new property in the blurred state object with the field name as a key, indicating that that field has been blurred.

The next step is adding an onBlur prop to our getFieldProps function, that dispatches this action when applicable:

getFieldProps: fieldName => ({
  // as before
  onBlur: () => {
    dispatch({ type: 'blur', payload: fieldName });
  },
}),

Finally, we need to provide the blurredErrors from our useValidation hook so that we can show the errors only when needed.

const blurredErrors = useMemo(() => {
    const returnValue = {};
    for (let fieldName in state.errors) {
      returnValue[fieldName] = state.blurred[fieldName]
        ? state.errors[fieldName]
        : null;
    }
    return returnValue;
  }, [state.errors, state.blurred]);
return {
  // as before
  blurredErrors,
};

Here, we create a memoized function that figures out which errors to show based on whether or not the field has been blurred. We recalculate this set of errors whenever the errors or blurred objects change. You can read more about the useMemo hook in the documentation.

Time For A Tiny Refactor

Our useValidation hook is now returning three sets of errors — most of which will look the same at some point in time. Instead of going down this route, we’re going to let the user specify in the config when they want the errors in their form to show up.

Our new option — showErrors — will accept either “submit” (the default), “always” or “blur”. We can add more options later, if we need to.

function getErrors(state, config) {
  if (config.showErrors === 'always') {
    return state.errors;
  }
  if (config.showErrors === 'blur') {
    return Object.entries(state.blurred)
      .filter(([, blurred]) => blurred)
      .reduce((acc, [name]) => ({ ...acc, [name]: state.errors[name] }), {});
  }
  return state.submitted ? state.errors : {};
}
const useValidation = config => {
  // as before
  const errors = useMemo(
    () => getErrors(state, config), 
    [state, config]
  );

  return {
    errors,
    // as before
  };
};

Since the error handling code started to take most of our space, we’re refactoring it out into its own function. If you don’t follow the Object.entries and .reduce stuff — that’s fine — it’s a rewrite of the for...in code in the last section.

If we wanted on-blur or instant validation, we could specify the showErrors option in our useValidation configuration object.

const config = {
  // as before
  showErrors: 'blur',
};
const { getFormProps, getFieldProps, errors } = useValidation(config);
// errors would now only include the ones that have been blurred
Note On Assumptions

“Note that I’m now assuming that each form will want to show errors the same way (always on submit, always on blur, etc). That might be true for most applications, but probably not for all. Being aware of your assumptions is a huge part of creating your API.”

Allow For Cross-Validation

A really powerful feature of a validation library is to allow for cross-validation — that is, to base one field’s validation on another field’s value.

To allow this, we need to make our custom hook accept a function instead of an object. This function will be called with the current field values. Implementing it is actually only three lines of code!

function useValidation(config) {
  const [state, dispatch] = useReducer(...);
  if (typeof config === 'function') {
    config = config(state.values);
  }
}

To use this feature, we can simply pass a function that returns the configuration object to useValidation:

const { getFieldProps } = useValidation(fields => ({ 
  password: {
    isRequired: { message: 'Please fill out the password' },
  },
  repeatPassword: {
    isRequired: { message: 'Please fill out the password one more time' },
    isEqual: { value: fields.password, message: 'Your passwords don’t match' }
  }
}));

Here, we use the value of fields.password to make sure two password fields contain the same input (which is terrible user experience, but that’s for another blog post).

Add Some Accessibility Wins

A neat thing to do when you’re in charge of the props of a field is to add the correct ARIA attributes by default. This will help screen readers with explaining your form.

A very simple improvement is to add aria-invalid="true" if the field has an error. Let’s implement that:

const useValidation = config => {
  // as before
  return {
    // as before
    getFieldProps: fieldName => ({
      // as before
      'aria-invalid': String(!!errors[fieldName]),
    }),
  }
};

That’s one added line of code, and a much better user experience for screen reader users.

You might wonder why we write String(!!errors[fieldName])? errors[fieldName] is either an error message string or empty, and the double negation gives us a boolean (and not just a truthy or falsy value). However, the aria-invalid property should be a string (it can also read “grammar” or “spelling”, in addition to “true” or “false”), so we need to coerce that boolean into its string equivalent.
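
A quick illustration of the coercion:

String(!!undefined)                 // "false" — no error message for this field
String(!!'This field is required')  // "true" — the field currently has an error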

There are still a few more tweaks we could do to improve accessibility, but this seems like a fair start.

Shorthand Validation Message Syntax

Most of the validators in the calidators package (and most other validators, I assume) only require an error message. Wouldn’t it be nice if we could just pass that string instead of an object with a message property containing that string?

Let’s implement that in our validateField function:

function validateField(fieldValue = '', fieldConfig, allFieldValues) {
  for (let validatorName in fieldConfig) {
    let validatorConfig = fieldConfig[validatorName];
    if (typeof validatorConfig === 'string') {
      validatorConfig = { message: validatorConfig };
    }
    const configuredValidator = validators[validatorName](validatorConfig);
    const errorMessage = configuredValidator(fieldValue);

    if (errorMessage) {
      return errorMessage;
    }
  }
  return null;
}

This way, we can rewrite our validation config like so:

const config = {
  username: {
    isRequired: 'The username is required',
    isEmail: 'The username should be a valid email address',
  },
};

Much cleaner!

Initial Field Values

Sometimes, we need to validate a form that’s already filled out. Our custom hook doesn’t support that yet — so let’s get to it!

Initial field values will be specified in the config for each field, in the property initialValue. If it’s not specified, it defaults to an empty string.
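
Assuming the config shape used by getInitialState below (validators nested under a fields key), a pre-filled field could look something like this; the field name and value are just placeholders:

const config = {
  fields: {
    username: {
      initialValue: 'existing-user',
      isRequired: 'The username is required',
    },
  },
  // as before
};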

We’re going to create a function getInitialState, which will create the initial state of our reducer for us.

function getInitialState(config) {
  if (typeof config === 'function') {
    config = config({});
  }
  const initialValues = {};
  const initialBlurred = {};
  for (let fieldName in config.fields) {
    initialValues[fieldName] = config.fields[fieldName].initialValue || '';
    initialBlurred[fieldName] = false;
  }
  const initialErrors = validateFields(initialValues, config.fields);
  return {
    values: initialValues,
    errors: initialErrors,
    blurred: initialBlurred,
    submitted: false,
  };
}

We go through all fields, check if they have an initialValue property, and set the initial value accordingly. Then we run those initial values through the validators and calculate the initial errors as well. We return the initial state object, which can then be passed to our useReducer hook.
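
Wiring that into the hook is then a matter of handing the computed object to useReducer. A minimal sketch, reusing the validationReducer we defined earlier:

const useValidation = config => {
  const [state, dispatch] = useReducer(
    validationReducer,
    getInitialState(config)
  );
  // as before
};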

Since we’re introducing a non-validator prop into the fields config, we need to skip it when we validate our fields. To do that, we change our validateField function:

function validateField(fieldValue = '', fieldConfig) {
  const specialProps = ['initialValue'];
  for (let validatorName in fieldConfig) {
    if (specialProps.includes(validatorName)) {
      continue;
    }
    // as before
  }
}

As we keep on adding more features like this, we can add them to our specialProps array.

Summing Up

We’re well on our way to create an amazing validation library. We’ve added tons of features, and we’re pretty much thought leaders by now.

In the next part of this series, we’re going to add all of those extras that make our validation library even trend on LinkedIn. Stay tuned!

Smashing Editorial
(dm, yk, il)

Source: Smashing Magazine, Creating Your Own React Validation Library: The Features (Part 2)

What’s Happening With GDPR And ePR? Where Does CookiePro Fit In?


Suzanne Scacca



(This is a sponsored article.) Is privacy an issue on the web? According to this ConsumerMan piece from NBC News a few years back, it is:

The Internet has become a serious threat to our privacy.
— Jeff Chester of the Center for Digital Democracy

Your online profile is being sold on the web. It’s kind of crazy and it’s not harmless.
— Sharon Goott Nissim of the Electronic Privacy Information Center

There are no limits to what types of information can be collected, how long it can be retained, with whom it can be shared or how it can be used.
— Susan Grant of the Consumer Federation of America

While there’s been talk of introducing a “Do Not Track” program into U.S. legislation, the EU is the first one to actually take steps to make the Internet a safer place for consumers.

On May 25, 2018, the General Data Protection Regulation (GDPR) was enacted. Soon to follow will be the ePrivacy Regulation (ePR).

With these initiatives holding businesses accountable for the information they track and use online, web developers have to add another thing to their list of requirements when building a website:

The protection of user privacy.

In this post, we’re going to look at:

  • Where we currently stand with GDPR,
  • What changes we’ve seen on the web as a result,
  • What’s coming down the line with ePR,
  • And take a look at the CookiePro Cookie Consent tool, which helps web developers make their websites compliant now.

GDPR: Where Are We Now?

With the one-year anniversary of GDPR upon us, now is a great time to talk about what the updated legislation has done for online privacy.

GDPR Recap

It’s not like the EU didn’t have privacy directives in place before. As Heather Burns explained in a Smashing Magazine article last year:

All of the existing principles from the original Directive stay with us under GDPR. What GDPR adds is new definitions and requirements to reflect changes in technology which simply did not exist in the dialup era. It also tightens up requirements for transparency, disclosure, and process: lessons learned from 23 years of experience.

One other key change that comes with moving from the previous privacy directive to this privacy regulation is that it’s now consistently implemented across all EU states. This makes it easier for businesses to implement digital privacy policies and for governing bodies to enforce them since there’s no longer any question of what one country has done with the implementation of the law. It’s the same for all.

What’s more, there are clearer guidelines for web developers that are responsible for implementing a privacy solution and notice on their clients’ websites.

Has GDPR Led to Any Changes in How Websites Handle Data?

It seems as though many companies are struggling to get compliant with GDPR, based on a test done by Talend in the summer of 2018. They sent data requests to over a hundred companies to see which ones would provide the requested information, per the new GDPR guidelines.

Here is what they found:

  • Only 35% of EU-based companies complied with the requests while 50% outside of the EU did.
  • Only 24% of retail companies responded (which is alarming considering the kind of data they collect from consumers).
  • Finance companies seemed to be the most compliant; still, only 50% responded.
  • 65% of companies took over 10 days to respond, with the average response time being 21 days.

What Talend suggests, then, is that digital services (e.g. SaaS, mobile apps, e-commerce) are more likely to fall in line with GDPR compliance. It’s the other companies — those that didn’t start as digital companies or who have older legacy systems — that are struggling to get onboard.

Regardless of what actions have been taken by businesses, they know they must do it.

A 2018 report published by McDermott Will & Emery and Ponemon Institute showed that, despite businesses’ inability to be compliant, they were scared of what would happen if they were found not to be:


Data on what businesses believed to be the greatest costs of failing to comply with GDPR. (Source: McDermott Will & Emery and Ponemon Institute) (Large preview)

Those that said they feared financial repercussions were right to do so. The GDPR assesses fines based on how severe the infringement is:

  • Lower level offenses result in fines of up to €10 million or 2% of the revenue made in the prior fiscal year.
  • Upper level offenses result in fines of up to €20 million or 4%.

Some high-profile cases of fines have already popped up in the news, too.

Google received a €50 million penalty for committing a number of violations.

Mainly, the issue taken with Google is that it buries its privacy policies and consent so deep that most consumers never find it. What’s more, a lot of their privacy policies are ambiguous or unclear, which leads users to “Accept” without really understanding what they’re accepting.

Facebook is another company we shouldn’t be too surprised to see in GDPR’s crosshairs.

Their penalty was only £500,000. That’s because the fine was assessed for grievances issued between 2007 and 2014 — before GDPR went into effect. It’ll be interesting to see if Facebook changes its privacy policies in light of the much larger sum of money they’ll owe when another inevitable breach occurs.


It’s not just the monetary fine businesses should be nervous about when failing to comply with GDPR.

Stephen Eckersley of the UK Information Commissioner’s Office said that, after the GDPR went into effect, the number of data breach reports increased exponentially.

In June of 2018, there were 1,700 reports of companies in violation of GDPR. Now, the average is roughly 400 a month. Even so, Eckersley estimates that there will be twice as many reports in 2019 as there were in previous years (36,000 vs. 18,000).

So, not only are the governing bodies willing to penalize businesses for failure to comply. It seems that consumers are fed up enough (and empowered!) to report more of these violations now.

Let’s Talk About ePR For A Second

The ePrivacy Regulation has not yet become law, but it’s expected to soon enough. That’s because both GDPR and ePR were drafted to work together to update the old Data Protection Directive.

ePR is an update to Article 7 in the EU Charter of Human Rights. GDPR is an update to Article 8.


The Freedoms laid out by the EU Charter of Human Rights. (Source: EU Charter of Human Rights) (Large preview)

Although they’re separately defined, it’s best to think of ePR as an enhancement of GDPR. So, not only do businesses have to take care with data collected from individuals, the ePR says that they have to be careful with protecting the identity of individuals, too.

As such, when the ePR rolls out, all digital communications between business and consumer will be protected. That includes:

  • Skype chats
  • Facebook messages
  • VoIP calls
  • Email marketing
  • Push notifications
  • And more.

If a consumer has not expressly given permission for a business to contact them, the ePR will prohibit them from doing so. In fact, the ePR will take it a step further and give more control to consumers when it comes to cookies management.

Rather than display a pop-up consent notice that asks “Is it okay if we use cookies to store your data?”, consumers will decide what happens through their browser settings.

However, we’re not at that point yet, which means it’s your job to get that notice up on your website and to make sure you’re being responsible with how their data is collected, stored and used.

What Web Developers Need To Do To Protect Visitor Privacy

Do a search for “How to Avoid Being Tracked Online”:


Search for “How to Avoid Being Tracked Online” on Google. (Source: Google) (Large preview)

There are over 57 million pages that appear in Google’s search results. Do similar keyword searches and you’ll also find endless pages and forum submissions where consumers express serious concerns over the information gathered about them online, wanting to know how to “stop cookies”.

Clearly, this is a matter that keeps consumers up at night.

The GDPR should be your motivation to go above and beyond in putting their minds at ease.

While you probably won’t have a hand in the actual data management or usage of data within the business, you can at least help your clients get their websites in order. And, if you already did this when GDPR initially was enacted, now would be a good time to revisit what you did and make sure their websites are still in compliance.

Just make sure that your client is safely handling visitor data and protecting their privacy before providing any sort of privacy consent statement. Those statements and their acceptance of them are worthless if the business isn’t actually fulfilling its promise.

Once that part of the compliance piece is in place, here’s what you need to do about cookies:

1. Understand How Cookies Work

Websites allow businesses to gather lots of data from visitors. Contact forms collect info on leads. eCommerce gateways accept methods of payment. And then there are cookies:

Cookies are pieces of data, normally stored in text files, that websites place on visitors’ computers to store a range of information, usually specific to that visitor — or rather the device they are using to view the site — like the browser or mobile phone.

There are some that collect bare-bones details that are necessary to provide visitors with the best experience. Like preserving a logged-in session as visitors move from page to page. Or not displaying a pop-up after a visitor dismissed it on a recent visit.

There are other cookies, usually from third-party tracking services, that pry deeper. These are the ones that track and later target visitors for the purposes of marketing and advertising.

Regardless of where the cookies come from or what purpose they serve, the fact of the matter is, consumers are being tracked. And, until recently, websites didn’t have to inform them when that took place or how much of their data was stored.

2. Don’t Use Cookies That Are Irrelevant

There’s no getting around the usage of cookies. Without them, you wouldn’t have access to analytics that tell you who’s visiting your website, where they come from and what they’re doing while they’re there. You also wouldn’t be able to serve up personalized content or notifications to keep their experience with the site feeling fresh.

That said, do you even know what kinds of cookies your website uses right now?

Before you go implementing your own cookie consent notice for visitors, make sure you understand what exactly it is you’re collecting from them.

Go to the CookiePro website and run a free scan on your client’s site:


CookiePro offers a free website privacy scan. (Source: CookiePro) (Large preview)

After you enter your URL and start the scan, you’ll be asked to provide just a few details about yourself and the company. The scan will start and you’ll receive a notice that says you’ll receive your free report within 24 hours.

Just to give you an idea of what you might see, here are the report results I received:


CookiePro runs a scan on all data collection elements and trackers. (Source: Cookie Consent) (Large preview)

As you can see, CookiePro does more than just tell me how many or which cookies my website has. It also includes forms that are gathering data from visitors as well as tags.

Be sure to review your report carefully. If you’re tracking data that’s completely unnecessary and unjustified for a website of this nature to get ahold of, that needs to change ASAP. Why put your clients’ business at risk and compromise visitor trust if you’re gathering data that has no reason to be in their hands?


CookiePro’s cookies report tells you what purpose they serve and where they come from. (Source: Cookie Consent) (Large preview)

Note: if you sign up for an account with CookiePro, you can run your own cookie audit from within the tool (which is part of the next step).

3. Provide Transparency About Cookie Usage

GDPR isn’t trying to discourage businesses from using cookies on their websites or other marketing channels. What it’s doing, instead, is encouraging them to be transparent about what’s happening with data and then be responsible with it once they have it.

So, once you know what sort of cookies you’re using and data you’re handling, it’s time to inform your visitors about this cookie usage.

Keep in mind that this shouldn’t just be served to EU-based visitors. While those are the only ones protected under the regulation, what could it hurt to let everyone know that their data and identity are protected when they’re on your website? The rest of the world will (hopefully) follow, so why not be proactive and get consent from everyone now?

To provide transparency, a simple entry notice is all you need to display to visitors.

For example, here is one from Debenhams:


This is an example of a cookies notice found on the Debenhams website. (Source: Debenhams) (Large preview)

As you can see, it’s not as simple as asking visitors to “Accept” or “Reject” cookies. They’re also given the option to manage them.

To add your own cookies entry banner and advanced options, use CookiePro’s Cookie Consent tool.

Signup is easy — if you start with the free plan, it takes just a few seconds to sign up. Within an hour, you’ll receive your login credentials to get started.


A peek inside the CookiePro Cookie Consent Dashboard. (Source: Cookie Consent) (Large preview)

Before you can create your cookie consent banner, though, you must add your website to the tool and run a scan on it. (You may have already completed that in the prior step).

When the scan is complete, you can start creating your cookie banner:


Creating a cookie banner within the Cookie Consent tool. (Source: Cookie Consent) (Large preview)

By publishing a cookie consent banner to your website, you’re taking the first big step to ensuring that visitors know that their data and identity is being protected.

4. Make Your Cookie Consent Form Stand Out

Don’t stop at simply adding a cookie banner to your website. As Vitaly Friedman explained:

In our research, the vast majority of users willingly provide consent without reading the cookie notice at all. The reason is obvious and understandable: many customers expect that a website ‘probably wouldn’t work or the content wouldn’t be accessible otherwise.’ Of course, that’s not necessarily true, but users can’t know for sure unless they try it out. In reality, though, nobody wants to play ping-pong with the cookie consent prompt and so they click the consent away by choosing the most obvious option: ‘OK.’

While ePR will eventually rid of us of this issue, you can do something about it now — and that’s to design your cookie consent form to stand out.

A word of caution: be careful with using pop-ups on a mobile website. Although consent forms are one of the exceptions to Google’s penalty against entry pop-ups, you still don’t want to compromise the visitor experience all for the sake of being GDPR compliant.

As such, you might be better off using a cookie banner at the top or bottom of the site and then designing it to really stand out.

What’s nice about CookiePro is that you can customize everything, so it really is yours to do with as you like. For example, here is one I designed:


A preview of a cookie consent banner built with Cookie Consent. (Source: Cookie Consent) (Large preview)

You can change:

  • Text color
  • Button color
  • Background color.

You can write your own copy for each element:

  • Header
  • Message
  • Cookie policy note
  • Cookie policy settings
  • Accept button.

And you get to decide how the banner will function if or when visitors engage with it.

5. Educate Visitors on Cookies

In addition to giving your cookie consent banner a unique look, use it as a tool to educate visitors on what cookies are and why you’re even using them. That’s what the Cookie Settings area is for.

With Cookie Consent, you can inform visitors about the different types of cookies that are used on the website. They then have the choice to toggle different ones on or off based on their comfort level.

That’s what’s so nice about CookiePro taking care of the cookie scan for you. That way, you know what kinds of cookies you actually have in place. All you have to do, then, is go to your Cookie List and choose which descriptions you want to display to visitors:


CookiePro lets you educate visitors about cookies used on the site. (Source: Cookie Consent) (Large preview)

Just make sure you explain the importance of the most basic of cookies (“strictly necessary” and “performance”) and why you recommend they leave them on. The rest you can provide explanations for in the hopes that their response will be, “Okay, yeah, I’d definitely like a personalized experience on this site.” If not, the choice is theirs to toggle off/on which kinds of cookies they want to be shown. And the Cookie Consent tool can help.

In other words, a cookie consent bar is not some superficial attempt to get consent. You’re trying to help them understand what cookies do and give them the power to influence their on-site experience.

Wrapping Up

There’s a lot we have to be thankful for with the Internet. It closes geographic gaps. It presents new opportunities for doing business. It enables consumers to buy pretty much anything they want with just a few clicks.

But as the Internet matures, the ways in which we build and use websites become more complex. And not just complex, but risky too.

GDPR and ePR have been a long time coming. As websites gather more data on consumers that can then be used by third parties or to follow them to other websites, web developers need to take a more active role in abiding by the new regulations while also putting visitors’ minds at ease. Starting with a cookie consent banner.

Smashing Editorial
(ms, yk, il)

Source: Smashing Magazine, What’s Happening With GDPR And ePR? Where Does CookiePro Fit In?

Switching From WordPress To Hugo


Christopher Kirk-Nielsen



When WordPress 5 was released, I was excited about making use of the Gutenberg editor to create custom blocks, as posts on my personal blog had a couple of features I could turn into a block, making it easier to set up my content. It was definitely a cool thing to have, yet it still felt quite bloated.

Around the same time, I started reading more and more about static site generators and the JAMstack (this article by Chris Ferdinandi convinced me). With personal side projects, you can kind of dismiss a wide variety of issues, but as a professional, you have to ensure you output the best quality possible. Performance, security and accessibility become the first things to think about. You can definitely optimize WordPress to be pretty fast, but faster than a static site on a CDN that doesn’t need to query the database nor generate your page every time? Not so easy.

I thought that I could put this into practice with a personal project of mine to learn and then be able to use this for professional projects, and maybe some of you would like to know how, too. In this article, I will go over how I made the transition from WordPress to a specific static site generator named Hugo.

Hugo is built in Go, which is a pretty fast and easy to use language once you get used to the syntax, which I will explain. It all compiles locally so you can preview your site right on your computer. The project will then be saved to a private repository. Additionally, I will walk you through how to host it on Netlify, and save your images on a Git LFS (Large File Storage). Finally, we’ll have a look at how we can set up a content management system to add posts and images (similar to the WordPress backend) with Netlify CMS.

Note that all of this is absolutely free, which is pretty amazing if you ask me (although you’ll have to pay extra if you use up all your LFS storage or if your site traffic is intense). Also, I am writing this from a Bitbucket user point of view, running on a Mac. Some steps might be slightly different but you should be able to follow along, no matter what setup you use.

You’ll need to be somewhat comfortable with HTML, CSS, JS, Git and the command terminal. Having a few notions with templating languages such as Liquid could be useful as well, but we will review Hugo’s templates to get you started. I will, nonetheless, provide as many details as possible!

I know it sounds like a lot, and before I started looking into this, it was for me, too. I will try to make this transition as smooth as possible for you by breaking down the steps. It’s not very difficult to find all the resources, but there was a bit of guesswork involved on my part, going from one documentation to the next.

  1. Exporting The Content From WordPress
  2. Preparing Your Blog Design
  3. Setting Up A New Repository
  4. Activating Git LFS (Optional)
  5. Creating The Site On Netlify
  6. Preparing For Netlify Large Media (Optional)
  7. Setting Up Hugo On Your Computer
  8. Creating Your Custom Theme
  9. Notes On The Hugo Syntax
  10. Content And Data
  11. Deploying On Netlify
  12. Setting Up A Custom Domain
  13. Editing Content On Netlify CMS

Note: If you have trouble with some of these, please let me know in the comments and I’ll try to help, but please note this is destined to be applied to a simple, static blog that doesn’t have a dozen widgets or comments (you can set that up later), and not a company site or personal portfolio. You undoubtedly could, though, but for the sake of simplicity, I’ll stick to a simple, static blog.

Prerequisites

Before we do anything, let’s create a project folder where everything from our tools to our local repository is going to reside. I’ll call it “WP2Hugo” (feel free to call it anything you want).

This tutorial will make use of a few command line tools such as npm and Git. If you don’t have them already, install Node.js (which includes npm) and Git on your machine.
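
If you’re not sure whether they’re already installed, a quick check in the terminal will tell you (any recent versions will do):

node --version && npm --version
git --version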

With these installed, let’s get started!

1. Exporting The Content From WordPress

First off, we’ll need to export your content from WordPress: posts, pages, and uploads. There are a few tools available that Hugo mentions but personally, only one of them worked: blog2md. This one works by running a JavaScript file with Node.js in your command terminal. It takes the XML files exported by WordPress, and outputs Markdown files with the right structure, converting your HTML to Markdown and adding what is called the Front Matter, which is a way to format metadata at the start of each file.

Go to your WordPress admin, and open the Tools menu, Export submenu. You can export what you want from there. I’ll refer to the exported file as YOUR-WP-EXPORT.xml.

The WordPress back-end interface with arrows indicating each step to reach the export feature.

WordPress export tool (Large preview)

You can select exactly what data you want to export from your WordPress blog.

Inside our WP2Hugo folder, I recommend creating a new folder named blog2md in which you’ll place the files from the blog2md tool, as well as your XML export from WordPress (YOUR-WP-EXPORT.xml). Also, create a new folder in there called out where your Markdown posts will go. Then, open up your command terminal, and navigate with the cd command to your newly created “blog2md” folder (or type cd with a space and drag the folder into the terminal).

You can now run the following commands to get your posts:

npm install
node index.js w YOUR-WP-EXPORT.xml out

Look into the /WP2Hugo/blog2md/out directory to check whether all of your posts (and potential pages) are there. If so, you might notice there’s something about comments in the documentation: I had a comment-free blog so I didn’t need them to be carried through but Hugo does offer several options for comments. If you had any comments on WordPress, you can export them for later re-implementation with a specialized service like Disqus.

If you’re familiar enough with JS, you can tweak the index.js file to change how your post files will come out by editing the wordpressImport function. You may want to capture the featured image, remove the permalink, change the date format, or set the type (if you have posts and pages). You’ll have to adapt it to your needs, but know that the loop (posts.forEach(function(post){ ... })) runs through all the posts from the export, so you can check for the XML content of each post in that loop and customize your Front Matter.

Additionally, if you need to update URLs contained in your posts (in my case, I wanted to make image links relative instead of absolute) or the date formatting, this is a good time to do so, but don’t lose sleep over it. Many text editors offer bulk editing so you can plug in a regular expression and make the changes you want across your files. Also, you can run the blog2md script as many times as needed, as it will overwrite any previously existing files in the output folder.
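
For example, on macOS a one-off bulk replacement can also be done straight from the terminal. This is only a sketch: the domain is a placeholder, and you should back up (or be ready to re-generate) the files first. Run it from the blog2md folder:

# rewrite absolute WordPress upload URLs as relative paths in every exported post
sed -i '' 's|https://your-old-domain.com/wp-content/uploads|/uploads|g' out/*.md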

Once you have your exported Markdown files, your content is ready. The next step is to get your WordPress theme ready to work in Hugo.

2. Preparing Your Blog Design

My blog had a typical layout with a header, a navigation bar, content and sidebar, and a footer — quite simple to set up. Instead of copying pieces of my WordPress theme, I rebuilt it all from scratch to ensure there were no superfluous styles or useless markup. This is a good time to implement new CSS techniques (pssst… Grid is pretty awesome!) and set up a more consistent naming strategy (something like CSS Wizardry’s guidelines). You can do what you want, but remember we’re trying to optimize our blog, so it’s good to review what you had and decide if it’s still worth keeping.

Start by breaking down your blog into parts so you can clearly see what goes where. This will help you structure your markup and your styles. By the way, Hugo has the built-in ability to compile Sass to CSS, so feel free to break up those styles into smaller files as much as you want!

A blog layout with a banner up top, with a menu below it. The main area has a large section for content and a smaller side area for secondary content. At the bottom is a footer with a copyright note and links to the author’s Twitter page and their email.

A very simple blog layout. (Large preview)

When I say simple, I mean really simple.

Alternatively, you can completely bypass this step for now, and style your blog as you go when your Hugo site is set up. I had the basic markup in place and preferred an iterative approach to styles. It’s also a good way to see what works and what doesn’t.

3. Setting Up A New Repository

Now that that is out of the way, we need to set up a repository. I’m going to assume you will want to create a new repository for this, which is going to be a great opportunity to use Git LFS (Large File Storage). The reason I advise to do this now is that implementing Git LFS when you already have hundreds of images is not as smooth. I’ve done it, but it was a headache you’re likely to want to avoid. This will also provide other benefits down the road with Netlify.

While I’ll be doing all this via Bitbucket and their proprietary Git GUI, Sourcetree, you can absolutely do this with GitHub and GitLab and their own desktop tools. You can also do it directly in the command terminal, but I like to automate and simplify the process as much as I can, reducing the risk of making silly mistakes.

When you’ve created your new repository on the Git platform of your choice, create an empty folder inside your local project folder (WP2Hugo), e.g. hugorepo, then open up your command terminal or Git GUI tool and initialize your local Git repository; then, link it to the remote repository (you can usually find the exact command to use on the newly created remote repository).
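
A typical sequence looks something like this (the remote URL is a placeholder; use the exact one your Git provider displays for the new repository):

cd WP2Hugo/hugorepo
git init
git remote add origin git@bitbucket.org:your-username/your-repo.git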

I’d recommend creating a dev (or stage) branch so that your main branch is strictly used for production deployments. It’ll also limit new builds to be generated only when you’re done with a potential series of changes. Creating a branch can be done locally or on your repository’s remote webpage.
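
Locally, that boils down to two commands (assuming you name the branch dev):

git checkout -b dev
git push -u origin dev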

A guide to the various steps to get to the 'New branch' form on repositories. GitHub requires the user to click the active branch and type a new name in the input field. GitLab requires the user to click a 'plus' menu that reveals a dropdown menu with a 'New branch' link to a page with the form. Bitbucket requires the user to click the 'plus' in the general menu to slide out options and to click the 'Create a branch' link to access a new page with the form.

How to create a new branch on GitHub, GitLab and Bitbucket. (Large preview)

GitHub makes it easy to create a branch by clicking the branch switcher and typing a new name. On GitLab, you need to open the “Plus” dropdown to access the option. Bitbucket requires you to open the “Plus” menu on the left to open the slide-out menu and click “Create a branch” in the “Get to work” section.

4. Activating Git LFS (Optional)

Git Large File Storage (LFS) is a Git feature that allows you to save large files in a more efficient way, such as Photoshop documents, ZIP archives and, in our case, images. Since images can need versioning but are not exactly code, it makes sense to store them differently from regular text files. The way it works is by storing the image on a remote server, and the file in your repository will be a text file which contains a pointer to that remote resource.

Alas, it’s not an option you just click to enable. You must set up your repository to activate LFS and this requires some work locally. With Git installed, you need to install a Git-LFS extension:

git lfs install

If, like me, that command didn’t work for you, try the Homebrew alternative (for macOS or Linux):

brew install git-lfs

Once that’s done, you’ll have to specify which files to track in your repository. I will host all of the images I uploaded in WordPress’s /upload folder in an identically-named folder on my Hugo setup, except that this folder will be inside a /static folder (which will resolve to the root once compiled). Decide on your folder structure, and track your files inside:

git lfs track "static/uploads/*"

This will track any file inside the /static/uploads folder. You can also use the following:

git lfs track "*.jpg"

This will track any and all JPG files in your repository. You can mix and match to only track JPGs in a certain folder, for example.

With that in place, you can commit your LFS configuration files to your repository and push that to your remote repository. The next time you locally commit a file that matches the LFS tracking configuration, it will be “converted” to an LFS resource. If working on a development branch, merge this commit into your main branch.
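
The LFS configuration lives in the .gitattributes file that git lfs track creates, so committing it is just a regular commit, for example:

git add .gitattributes
git commit -m "Track uploaded images with Git LFS"
git push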

Let’s now take a look at Netlify.

5. Creating The Site On Netlify

At this point, your repository is set up, so you can go ahead and create an account on Netlify. You can even log in with your GitHub, GitLab or Bitbucket account if you like. Once on the dashboard, click the “New site from Git” button in the top right-hand corner, and create your new Netlify site.

Note: You can leave all the options at their default values for now.

The form displayed on Netlify when a user creates a new website, with build options left to their default, empty values.

Netlify’s new site creation page. (Large preview)

Select your Git provider: this will open a pop-up window to authenticate you. When that is done, the window will close and you’ll see a list of repositories on that Git provider you have access to. Select your freshly created repo and continue. You’ll be asked a few things, most of which you can just leave by default as all the options are editable later on.

For now, in the Site Settings, click “Change site name” and name your site anything you want — I’ll go with chris-smashing-hugo-blog. We will now be able to access the site via chris-smashing-hugo-blog.netlify.com: a beautiful 404 page!

6. Preparing For Netlify Large Media (Optional)

If you set up Git LFS and plan on using Netlify, you’ll want to follow these steps. It’s a bit more convoluted but definitely worth it: it’ll enable you to set query strings on image URLs that will be automatically transformed.

Let’s say you have a link to portrait.jpg which is an image that’s 900×1600 pixels. With Netlify Large Media, you can call the file portrait.jpg?nf_resize=fit&w=420, which will proportionally scale it. If you define both w and h, and set nf_resize=smartcrop, it’ll resize by cropping to focus on the point of interest of the image (as determined by a fancy algorithm, a.k.a. robot brain magic!). I find this to be a great way to have thumbnails like the ones WordPress generates, without needing several files for an image on my repository.
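
In a template or a post, that query string simply goes onto the image URL; a sketch with a hypothetical image path:

<img src="/uploads/portrait.jpg?nf_resize=fit&w=420" alt="A portrait, scaled down to a width of 420 pixels">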

If this sounds appealing to you, let’s set it up!

The first step is installing Netlify’s command-line interface (CLI) via npm:

npm install netlify-cli -g

If it worked, running the command netlify should result in info about the tool.

You’ll then need to make sure you are in your local repository folder (that I named “hugorepo” earlier), and execute:

netlify login

Authorize the token. Next, we’ll have to install the Netlify Large Media plugin. Run:

netlify plugins:install netlify-lm-plugin
netlify lm:install

There should be a command line shown at the end of the resulting message that you must copy (which should look like /Users/YOURNAME/.netlify/helper/path.bash.inc on Mac) — run it. Note that Keychain might ask you for your machine’s administrator password on macOS.

The next step is to link Netlify:

netlify link

You can provide your site name here (I provided the chris-smashing-hugo-blog name I gave it earlier). With this in place, you just need to set up the Large Media feature by executing the following:

netlify lm:setup

Commit these new changes to your local repository, and push them to the remote development branch. I had a few errors with Sourcetree and Keychain along the lines of git "credential-netlify" is not a git command. If that’s your case, try to manually push with these commands:

git add -A
git commit -m "Set up Netlify Large media"
git push

If that didn’t work, you might need to install Netlify credential Helper. Here’s how to do it with Homebrew:

brew tap netlify/git-credential-netlify
brew install git-credential-netlify

Try pushing your commit through now (either with your GUI or command terminal): it should work!

Note: If you change your Netlify password, run netlify logout and netlify login again.

You might ask: “All this, and we still haven’t even initialized our Hugo build?” Yes, I know, it took a while but all the preparations for the transition are done. We can now get our Hugo blog set up!

7. Setting Up Hugo On Your Computer

You’ll first need to install Hugo on your computer with any of the provided options. I’ll be using Homebrew but Windows users can use Scoop or Chocolatey, or download a package directly.

brew install hugo

You’ll then need to create a new Hugo site but it won’t like setting it up in a non-empty folder. First option: you can create it in a new folder and move its contents to the local repository folder:

hugo new site your_temporary_folder

Second option: you can force it to install in your local repository with a flag, just make sure you’re running that in the right folder:

hugo new site . --force

You now have a Hugo site, which you can spin up with this command:

hugo server

You’ll get a local preview on localhost. Sadly, you have no content and no theme of your own. Not to worry, we’ll get that set up really soon!

Let’s first have a look at the configuration file (config.toml in my case): let’s set up the blog’s name and base URL (this must match the URL on your Netlify dashboard):

title = "Chris’ Smashing Hugo Blog"
baseURL = "https://chris-smashing-hugo-blog.netlify.com"

This link will be overwritten while you develop locally, so you shouldn’t run into 404 errors.

Let’s give Hugo our exported articles in Markdown format. They should be sitting in the /WP2Hugo/blog2md/out folder from the first step. In the Hugo folder (a.k.a. the local repository directory), access the content folder and create a subfolder named posts. Place your Markdown files in there, and then let’s get a theme set up.
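
At this point, the local repository should look roughly like this (the post file names are just examples, and some generated folders are omitted):

hugorepo/
├── config.toml
├── content/
│   └── posts/
│       ├── my-first-post.md
│       └── another-post.md
├── static/
│   └── uploads/
└── themes/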

8. Creating Your Custom Theme

For this step, I recommend downloading the Saito boilerplate, which is a theme with all the partials you’ll need to get started (and no styles) — a very useful starting point. You could, of course, look at this collection of ready-made themes for Hugo if you want to skip over this part of the process. It’s all up to you!

From the local repository folder, clone the theme into themes/saito:

git submodule add https://github.com/hakuoku/saito-boilerplate.git themes/saito  

You can rename this folder to anything you want, such as cool-theme. You’ll have to tell your Hugo configuration which theme you want to use by editing your config.toml/yaml/json file. Edit the theme value to saito, or cool-theme, or whatever your theme’s folder name is. Your preview should now show your blog’s title along with a copyright line. It’s a start, right?
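
In config.toml, that is a single line (adjust the value to your theme folder’s name):

theme = "saito"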

Open the theme’s layout/partials/home.html file and edit it to display your content, limiting to the five first items which are of type posts (inside the content/posts/ folder), with range, first and where:

{{ range first 5 (where .Paginator.Pages "Type" "posts") }}
  <h2>{{ .Title }}</h2>
  {{ .Content }}
{{ end }}

Your content is now visible, in the most basic of ways. It’s time to make it yours — let’s dive in!

Templating With Hugo

You can first read the Introduction to Hugo templating if you like, but I’ll try to go over a few essentials that will help you understand the basics.

All operations in Hugo are defined inside delimiters: double curly braces (e.g. {{ .Title }}), which should feel familiar if you’ve done a bit of templating before. If you haven’t, think of it as a way to execute operations or inject values at a specific point in your markup. For blocks, they end with the {{ end }} tag, for all operations aside from shortcodes.

Themes have a layout folder which contains the pieces of the layout. The _default folder will be Hugo’s starting point, baseof.html being (you guessed it!) the base of your layout. It will call each component, called “partials” (more on this on Hugo’s documentation about Partial Template), similar to how you would use include in PHP, which you may have already seen in your WordPress theme. Partials can call other partials — just don’t make it an infinite loop.

You can call a partial with {{ partial "file.html" . }} syntax. The partial section is pretty straightforward, but the two other ones might need explaining. You might expect to have to write partials/file.html but since all partials are to be in the “partials” folder, Hugo can find that folder just fine. Of course, you can create subfolders inside the “partials” folder if you need more organization.

You may have noticed a stray dot: this is the context you’re passing to your partial. If you had a menu partial, and a list of links and labels, you could pass that list into the partial so that it could only access to that list, and nothing else. I’ll talk more about this elusive dot in the next section.
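
As a quick illustration (assuming a menu named main is defined in the site configuration), a navigation partial could be handed only the list it needs, and the Dot inside it would then be that list:

{{ partial "nav.html" .Site.Menus.main }}

{{ /* partials/nav.html: the Dot is now the list of menu entries */ }}
<nav>
  {{ range . }}
    <a href="{{ .URL }}">{{ .Name }}</a>
  {{ end }}
</nav>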

Your baseof.html file is a shell that calls all the various partials needed to render your blog layout. It should have minimal HTML and lots of partials:

<!DOCTYPE html>
<html lang="{{ .Site.LanguageCode }}">
    <head>
        <title>{{ block "title" . }}{{ .Site.Title }}{{ end }}</title>
        {{ partial "head.html" . }}
    </head>
    <body>
        {{ partial "header.html" . }}
        {{ partial "nav.html" . }}

        <main>
            {{ block "main" . }}{{ end }}
        </main>

        <aside>
            {{ partial "sidebar.html" . }}
        </aside>

        {{ partial "footer.html" . }}
    </body>
</html>

The {{ block "main" . }}{{ end }} line is different because it is a block that is defined with a template based on the content of the current page (homepage, single post page, etc.) with {{ define "main" }}.

Stylesheets

In your theme, create a folder named assets in which we will place a css folder. It will contain our SCSS files, or a trusty ol’ CSS file. Now, there should be a css.html file in the partials folder (which gets called by head.html). To convert Sass/SCSS to CSS, and minify the stylesheet, we would use this series of functions (using the Hugo Pipes syntax instead of wrapping the functions around each other):

{{ $style := resources.Get "css/style.scss" | toCSS | minify | fingerprint }}

As a bonus — since I struggled to find a straight answer — if you want to use Autoprefixer, Hugo also implements PostCSS. You can add an extra pipe function between toCSS and minify on the first line, like so:

{{ $style := resources.Get "css/style.scss" | toCSS | postCSS | minify | fingerprint }}

Create a “postcss.config.js” file at the root of your Hugo blog, and pass in the options, such as:

module.exports = {
    plugins: {
        autoprefixer: {
            browsers: [
                "> 1%",
                "last 2 versions"
            ]
        }
    },
}

And presto! From Sass to prefixed, minified CSS. The “fingerprint” pipe function is to make sure the filename is unique, like style.c66e6096bdc14c2d3a737cff95b85ad89c99b9d1.min.css. If you change the stylesheet, the fingerprint changes, so the filename is different, and thus, you get an effective cache busting solution.
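
To actually load the compiled file, the same css.html partial can then point a link tag at the generated resource. A minimal sketch:

<link rel="stylesheet" href="{{ $style.Permalink }}">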

9. Notes On The Hugo Syntax

I want to make sure you understand “the Dot”, which is how Hugo scopes variables (or in my own words, provides a contextual reference) that you will be using in your templates.

The Dot And Scoping

The Dot is like a top-level variable that you can use in any template or shortcode, but its value is scoped to its context. The Dot’s value in a top-level template like baseof.html is different from the value inside loop blocks or with blocks.

Let’s say this is in our template in our head.html partial:

{{ with .Site.Title }}{{ . }}{{ end }}

Even though we are running this in the main scope, the Dot’s value changes based on context, which is .Site.Title in this case. So, to print the value, you only need to write . instead of re-typing the variable name again. This confused me at first but you get used to it really quick, and it helps with reducing redundancy since you only name the variable once. If something doesn’t work, it’s usually because you’re trying to call a top-level variable inside a scoped block.

So how do you use the top-level scope inside a scoped block? Well, let’s say you want to check for one value but use another. You can use $ which will always be the top-level scope:

{{ with .Site.Params.InfoEnglish }}{{ $.Site.Params.DescriptionEnglish }}{{ end }}

Inside our condition, the scope is .Site.Params.InfoEnglish but we can still access values outside of it with $, where intuitively using .Site.Params.DescriptionEnglish would not work because it would attempt to resolve to .Site.Params.InfoEnglish.Site.Params.DescriptionEnglish, throwing an error.

Custom Variables

You can assign variables by using the following syntax:

{{ $customvar := "custom value" }}

The variable name must start with $ and the assignment operator must be := if it’s the first time it’s being assigned, = otherwise like so:

{{ $customvar = "updated value" }}

The problem you might run into is that a variable declared inside a block won’t persist outside of that scope, which brings me to my next point.
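
Here is a minimal sketch of that limitation (the names are made up for illustration): a variable declared inside a range block disappears once the block ends, while a variable declared outside remains available.

{{ $greeting := "Hello" }}
{{ range (slice "Alice" "Bob") }}
    {{ $name := . }}{{/* $name only exists inside this range block */}}
    <p>{{ $greeting }}, {{ $name }}!</p>
{{ end }}
{{/* Here, $greeting is still available, but $name no longer exists */}}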

Scratch

The Scratch functionality allows you to assign values that are available in all contexts. Say you have a list of movies in a movies.json file:

[
    {
        "name": "The Room",
        "rating": 4
    },
    {
        "name": "Back to the Future",
        "rating": 10
    },
    {
        "name": "The Artist",
        "rating": 7
    }
]

Now, you want to iterate over the file’s contents and store your favorite one to use later. This is where Scratch comes into play:

{{ .Scratch.Set "favouriteMovie" "None" }}{{/* Optional, just so you can see the difference in syntax based on the scope */}}

{{ range .Site.Data.movies }}
        {{ if ge .rating 10 }}
            {{/* We must use .Scratch prefixed with $, because the scope is .Site.Data.movies at the current index of the loop */}}
            {{ $.Scratch.Set "favouriteMovie" .name }}
        {{ end }}
{{ end }}
[...]
My favourite movie is {{ .Scratch.Get "favouriteMovie" }}
<!-- Expected output => My favourite movie is Back to the Future -->

With Scratch, we can extract a value from inside the loop and use it anywhere. As your theme gets more and more complex, you will probably find yourself reaching for Scratch.

Note: This is merely an example as this loop can be optimized to output this result without Scratch, but this should give you a better understanding of how it works.
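
For reference, one possible Scratch-free version that the note above alludes to could use the where function — a sketch, assuming the same movies.json data file (the float literal keeps the comparison types consistent with JSON numbers):

{{ range where .Site.Data.movies "rating" ">=" 10.0 }}
My favourite movie is {{ .name }}
{{ end }}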

Conditionals

The syntax for conditionals is a bit different from what you’d expect — from a JavaScript or PHP perspective. There are, in essence, functions which take two arguments (parentheses are optional if you pass the values directly):

{{ if eq .Site.LanguageCode "en-us" }}Welcome!{{ end }}

There are several of these functions:

  • eq checks for equality
  • ne checks for inequality
  • gt checks for greater than
  • ge checks for greater than or equal to
  • lt checks for less than
  • le checks for less than or equal to

Note: You can learn all about the functions Hugo offers in the Hugo Functions Quick Reference.
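
These comparison functions can also be combined with and, or, and not. A quick sketch, reusing the movies data file from above:

{{ range .Site.Data.movies }}
    {{ if and (ge .rating 7) (ne .name "The Room") }}
        <p>{{ .name }} is worth watching.</p>
    {{ end }}
{{ end }}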

Whitespace

If you’re as picky about the output as I am, you might notice some undesired blank lines. This is because Hugo will parse your markup as is, leaving blank lines around conditionals that were not met, for example.

Let’s say we have this hypothetical partial:

{{ if eq .Site.LanguageCode "en-us" }}
<p>Welcome to my blog!</p>
{{ end }}
<img src="https//jamstack.org/uploads/portrait.jpg" alt="Blog Author">

If the site’s language code is not en-us, this will be the HTML output (note the three empty lines before the image tag):

<img src="https//jamstack.org/uploads/portrait.jpg" alt="Blog Author">

Hugo provides a syntax to address this with a hyphen beside the curly braces on the inside of the delimiter. {{- will trim the whitespace before the braces, and -}} will trim the whitespace after the braces. You can use either or both at the same time, but just make sure there is a space between the hyphen and the operation inside of the delimiter.

As such, if your template contains the following:

{{- if eq .Site.LanguageCode "en-us" -}}
<p>Welcome to my blog!</p>
{{- end -}}
<img src="https//jamstack.org/uploads/portrait.jpg" alt="Blog Author">

…then the markup will result in this (with no empty lines):

<img src="https//jamstack.org/uploads/portrait.jpg" alt="Blog Author">

This can be helpful for other situations like elements with display: inline-block that should not have whitespace between them. Conversely, if you want to make sure each element is on its own line in the markup (e.g. in a {{ range }} loop), you’ll have to carefully place your hyphens to avoid “greedy” whitespace trimming.

The example above would output the following if the site’s language code matches “en-us” (no more line breaks between the p and img tags):

<p>Welcome to my blog!</p><img src="https://jamstack.org/uploads/portrait.jpg" alt="Blog Author">

10. Content And Data

Your content is stored as Markdown files, but you can use HTML, too. Hugo will render it properly when building your site.

Your homepage will call the _default/list.html layout, which might look like this:

{{ define "main" }}
    {{ partial "list.html" . }}
{{ end }}

The main block calls the list.html partial with the context of ., a.k.a. the top level. The list.html partial may look like this:

{{ define "main" }}
<ol class="articles">
    {{ range .Paginator.Pages }}
        <li>
            <article>
                <a href="{{ .URL }}">
                    <h2>{{ .Title }}</h2>
                    <img src="{{ .Params.featuredimage }}" alt="">
                    <time datetime="{{ .Date.Format "2006-01-02" }}">
                        {{ .Date.Format "January 2 2006" }}
                    </time>
                </a>
            </article>
        </li>
    {{ end }}
</ol>
{{ partial "pagination.html" . }}

Now we have a basic list of our articles, which you can style as you wish! The number of articles per page is defined in the configuration file, with paginate = 5 (in TOML).
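
The pagination.html partial called at the end of the layout isn’t shown in this article, but a minimal sketch of one, built on Hugo’s .Paginator object, could look like this:

{{ if gt .Paginator.TotalPages 1 }}
<nav class="pagination">
    {{ if .Paginator.HasPrev }}
        <a href="{{ .Paginator.Prev.URL }}">Newer posts</a>
    {{ end }}
    <span>Page {{ .Paginator.PageNumber }} of {{ .Paginator.TotalPages }}</span>
    {{ if .Paginator.HasNext }}
        <a href="{{ .Paginator.Next.URL }}">Older posts</a>
    {{ end }}
</nav>
{{ end }}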

You might be utterly confused, as I was, by the date formatting in Hugo. The way each time unit is mapped to a number (1 for the month, 2 for the day, 3 for the hour, and so on) made a lot more sense to me once I saw the visual explanation below that the Go language documentation provides — which is kind of weird, but kind of smart, too!

 Jan 2 15:04:05 2006 MST
=> 1 2  3  4  5    6  -7

Now all that’s left to do is to display your post on a single page. You can edit the post.html partial to customize your article’s layout:

<article>
    <header>
        <h1>{{ .Title }}</h1>
        <p>
            Posted on <time datetime="{{ .Date.Format "2006-01-02" }}">{{ .Date.Format "2006. 1. 2" }}</time>
        </p>
    </header>
    <section>
        {{ .Content }}
    </section>
</article>

And that’s how you display your content!
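
If post.html lives in your partials folder, the layout that calls it (layouts/_default/single.html in most themes) probably just fills the main block — a sketch, mirroring the list layout shown earlier:

{{ define "main" }}
    {{ partial "post.html" . }}
{{ end }}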

If you’d like to customize the URL, update your configuration file by adding a [permalinks] option (TOML), which in this case will make the URLs look like my-blog.com/post-slug/:

[permalinks]
    posts = ":filename/"

If you want to generate an RSS feed of your content (because RSS is awesome), add the following in your site configuration file (Saito’s default template will display the appropriate tags in head.html if these options are detected):

rssLimit = 10
[outputFormats]
    [outputFormats.RSS]
        mediatype = "application/rss"
        baseName = "feed"

But what if you had some sort of content outside of a post? That’s where data templates come in: you can create JSON files and extract their data to create your menu or an element in your sidebar. YAML and TOML are also options, but they are less readable with complex data (e.g. nested objects). You could, of course, set this in your site’s configuration file, but it is — to me — a bit less easy to navigate and less forgiving.

Let’s create a list of “cool sites” that you may want to show in your sidebar — with a link and a label for each site as an array in JSON:

{
    "coolsites": [
        { "link": "https://smashingmagazine.com", "label": "Smashing Magazine" },
        { "link": "http://gohugo.io/", "label": "Hugo" },
        { "link": "https://netlify.com", "label": "Netlify" }
    ]
}

You can save this file in your repository root, or your theme root, inside a data folder, such as /data/coolsites.json. Then, in your sidebar.html partial, you can iterate over it with range using .Site.Data.coolsites:

<h3>Cool Sites:</h3>
<ul>
{{ range .Site.Data.coolsites.coolsites }}
    <li><a href="{{ .link }}">{{ .label }}</a></li>
{{ end }}
</ul>

This is very useful for any kind of custom data you want to iterate over. I used it to create a Google Fonts list for my theme, to define which categories the posts can be in, to list authors (with bio, avatar and homepage link), and to set which menus to show and in which order. You can really do a lot with this, and it is pretty straightforward.

A final thought on data and such: anything you put in your Hugo /static folder will be available on the root (/) on the live build. The same goes for the theme folder.

11. Deploying On Netlify

So you’re done, or maybe you just want to see what kind of magic Netlify operates? Sounds good to me, as long as your local Hugo server doesn’t return an error.

Commit your changes and push them to your remote development branch (dev). Head over to Netlify next, and access your site’s settings. You will see an option for “Build & deploy”. We’re going to need to change a couple of things here.

  1. First, in the “Build settings” section, make sure “Build command” is set to hugo and that “Publish directory” is set to public (the default, which it’s recommended you keep in your Hugo config file);
  2. Next, in the “Deploy contexts” section, set “Production branch” to your main branch in your repository. I also suggest your “Branch deploys” to be set to “Deploy only the production branch”;
  3. Finally, in the “Environment variables” section, edit the variables and click “New variable”. We’re going to pin the Hugo version used for the build to 0.53 with the following pair: set key to HUGO_VERSION and value to 0.53 (an equivalent netlify.toml sketch follows this list).
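
If you prefer keeping these settings in the repository instead of the UI, Netlify also reads a netlify.toml file at the root of the repo — a sketch with the same values as above:

# netlify.toml
[build]
    command = "hugo"
    publish = "public"

[build.environment]
    HUGO_VERSION = "0.53"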

Now head on over to your remote repository and merge your development branch into your main branch: this will be the hook that will deploy your updated blog (this can be customized but the default is reasonable to me).

Back to your Netlify dashboard, your site’s “Production deploys” should have some new activity. If everything went right, this should process and resolve to a “Published” label. Clicking the deploy item will open an overview with a log of the operations. Up top, you will see “Preview deploy”. Go on, click it — you deserve it. It’s alive!

12. Setting Up A Custom Domain

Having the URL as my-super-site.netlify.com isn’t to your taste, and you already own my-super-site.com? I get it. Let’s change that!

Head over to your domain registrar and go to your domain’s DNS settings. Here, you’ll have to create a new entry: you can either set an ALIAS/CNAME record that points to my-super-site.netlify.com, or set an A record that points your domain to Netlify’s load balancer, which is 104.198.14.52 at the time of writing.

You can find the latest information on Netlify’s documentation on custom domains. The load balancer IP will be in the DNS settings section, under “Manual DNS configuration for root and www custom domains”.

When that’s done, head over to your site’s dashboard on Netlify and click “Domain settings”, where you’ll see “Add custom domain”. Enter your domain name to verify it.

You can also manage your domains via your dashboard in the Domains tab. The interface feels less confusing on this page, but maybe it will help make more sense of your DNS settings as it did for me.

Note: Netlify can also handle everything for you if you want to buy a domain through them. It’s easier but it’s an extra cost.

After you’ve set up your custom domain, in “Domain settings”, scroll down to the “HTTPS” section and enable the SSL/TLS certificate. It might take a few minutes but it will grant you a free certificate: your domain now runs on HTTPS.

13. Editing Content On Netlify CMS

If you want to edit your articles, upload images and change your blog settings like you’d do on WordPress’ back-end interface, you can use Netlify CMS which has a pretty good tutorial available. It’s a single file that will handle everything for you (and it is generator-agnostic: it will work with Jekyll, Eleventy, and so on).

You just need to upload two files in a folder:

  • the CMS (a single HTML file);
  • a config file (a YAML file).

The latter will hold all the settings of your particular site.

Go to your Hugo root’s /static folder and create a new folder which you will access via my-super-site.com/FOLDER_NAME (I will call mine admin). Inside this admin folder, create an index.html file by copying the markup provided by Netlify CMS:

<!doctype html>
<html>
<head>
    <meta charset="utf-8" />
    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
    <title>Content Manager</title>
</head>
<body>
<!-- Include the script that builds the page and powers Netlify CMS -->
    <script src="https://unpkg.com/netlify-cms@2.0.0/dist/netlify-cms.js"></script>
</body>
</html>

The other file you’ll need to create is the configuration file: config.yml. It will allow you to define your site’s settings (name, URL, etc.) so that you can set up what your posts’ front matter should contain, as well as how your data files (if any) should be editable. It’s a bit more complex to set up, but that doesn’t mean it isn’t easy.

If you’re using GitHub or GitLab, start your config.yml file with:

backend:
    name: git-gateway
    branch: dev # Branch to update (optional; defaults to master)

If you’re using Bitbucket, it’s a bit different:

backend:
    name: bitbucket
    repo: your-username/your-hugorepo
    branch: dev # Branch to update (optional; defaults to master)

Then, for our uploads, we’ll have to tell the CMS where to store them:

media_folder: "static/images/uploads" # Media files will be stored in the repo under static/images/uploads
public_folder: "/images/uploads" # The src attribute for uploaded media will begin with /images/uploads

When you create a new post, the CMS will generate the slug for the filename which you can customize with three options:

slug:
    encoding: "ascii" # You can also use "unicode" for non-Latin
    clean_accents: true # Removes diacritics from characters like é or å
    sanitize_replacement: "-" # Replace unsafe characters with this string

Finally, you’ll need to define how the data in your posts is structured. I will also define how the data file coolsites is structured — just in case I want to add another site to the list. These are set with the collections object which will definitely be the most verbose one, along with a nice handful of options you can read more about here.

collections:
    - name: "articles" # Used in routes, e.g., /admin/collections/blog
      label: "Articles" # Used in the Netlify CMS user interface
      folder: "content/posts" # The path to the folder where the posts are stored, usually content/posts for Hugo
      create: true # Allow users to create new documents in this collection
      slug: "{{slug}}" # Filename template, e.g., post-title.md
      fields: # The fields for each document, usually in front matter
          - {label: "Title", name: "title", widget: "string", required: true}
          - {label: "Draft", name: "draft", widget: "boolean", default: true}
          - {label: "Type", name: "type", widget: "hidden", default: "post"}
          - {label: "Publish Date", name: "date", widget: "date", format: "YYYY-MM-DD"}
          - {label: "Featured Image", name: "featuredimage", widget: "image"}
          - {label: "Author", name: "author", widget: "string"}
          - {label: "Body", name: "body", widget: "markdown"}
    - name: 'coolsites'
      label: 'Cool Sites'
      file: 'data/coolsites.json'
      description: 'Websites to check out'
      fields:
          - name: coolsites
            label: Sites
            label_singular: 'Site'
            widget: list
            fields:
                - { label: 'Site URL', name: 'link', widget: 'string', hint: 'https://…' }
                - { label: 'Site Name', name: 'label', widget: 'string' }
Note: You can read more about how to configure individual fields in the Netlify CMS Widgets documentation which goes over each type of widget and how to use them — especially useful for date formats.

Authentication

The last thing we need to do is to ensure only authorized users can access the backend! Using your Git provider’s authentication is an easy way to go about this.

Head over to your Netlify site and click the “Settings” tab. Then go to “Access control” which is the last link in the menu on the left side. Here, you can configure OAuth to run via GitHub, GitLab or Bitbucket by providing a key and a secret value defined for your user account (not in the repository). You’ll want to use the same Git provider as the one your repo is saved on.

GitHub

Go to your “Settings” page on GitHub (click your avatar to reveal the menu), and access “Developer Settings”. Click “Register a new application” and provide the required values:

  • a name, such as “Netlify CMS for my super blog”;
  • a homepage URL, the link to your Netlify site;
  • a description, if you feel like it;
  • the application callback URL, which must be “https://api.netlify.com/auth/done”.

Save, and you’ll see your Client ID and Client Secret. Provide them to Netlify’s Access Control.

GitLab

Click your avatar to access the Settings page, and click “Applications” in the “User Settings” menu on the left. You’ll see a form to add a new application. Provide the following information:

  • a name, such as “Netlify CMS for my super blog”;
  • a redirect URI, which must be “https://api.netlify.com/auth/done”;
  • the scopes that should be checked are:
    • api
    • read_user
    • read_repository
    • write_repository
    • read_registry

Saving your application will give you your Application ID and Secret, that you can now enter on Netlify’s Access Control.

Bitbucket

Head over to your user account settings (click your avatar, then “Bitbucket settings”). Under “Access Management”, click “OAuth”. In the “OAuth consumers” section, click “Add consumer”. You can leave most things at their default values except for these:

  • a name, such as “Netlify CMS for my super blog”;
  • a callback URL, which must be “https://api.netlify.com/auth/done”;
  • the permissions that should be checked are:
    • Account: Email, Read, Write
    • Repositories: Read, Write, Admin
    • Pull Requests: Read, Write
    • Webhooks: Read and write

After saving, you can access your key and secret, which you can then provide back on Netlify’s Access Control.

After providing the tokens, go to Netlify, and find the Site Settings. Head to “Identity” and enable the feature. You can now add an External Provider: select your Git provider and click on “Enable”.

In case you need additional details, Netlify CMS has an authentication guide you can read.

You can now access your Netlify site’s backend and edit content. Every edit is a commit on your repo, in the branch specified in your configuration file. If you kept your main branch as the target for Netlify CMS, each time you save, it will run a new build. More convenient, but not as clean with “in-between states”.

Having it save on a dev branch allows you to have finer control on when you want to run a new build. This is especially important if your blog has a lot of content and requires a longer build time. Either way will work; it’s just a matter of how you want to run your blog.

Also, please note that Git LFS is something you installed locally, so images uploaded via Netlify CMS will be “normal”. If you pull your remote branch locally, the images should be converted to LFS, which you can then commit and push to your remote branch. Also, Netlify CMS does not currently support LFS, so the images will not be displayed in the CMS, but they will show up in your final build.

Recommended reading: Static Site Generators Reviewed: Jekyll, Middleman, Roots, Hugo

Conclusion

What a ride! In this tutorial, you’ve learned how to export your WordPress post to Markdown files, create a new repository, set up Git LFS, host a site on Netlify, generate a Hugo site, create your own theme and edit the content with Netlify CMS. Not too bad!

What’s next? Well, you could experiment with your Hugo setup and read more about the various tools Hugo offers — there are many that I didn’t cover for the sake of brevity.

Explore! Have fun! Blog!


Source: Smashing Magazine, Switching From WordPress To Hugo

Vue.js And SEO: How To Optimize Reactive Websites For Search Engines And Bots

dreamt up by webguru in Uncategorized | Comments Off on Vue.js And SEO: How To Optimize Reactive Websites For Search Engines And Bots

Vue.js And SEO: How To Optimize Reactive Websites For Search Engines And Bots

Vue.js And SEO: How To Optimize Reactive Websites For Search Engines And Bots

Paolo Mioni



Reactive JavaScript Frameworks (such as React, Vue.js, and Angular) are all the rage lately, and it’s no wonder that they are being used in more and more websites and applications due to their flexibility, modularity, and ease of automated testing.

These frameworks allow one to achieve new, previously-unthinkable things on a website or app, but how do they perform in terms of SEO? Do the pages that have been created with these frameworks get indexed by Google? Since with these frameworks all — or most — of the page rendering gets done in JavaScript (and the HTML that gets downloaded by bots is mostly empty), it seems that they’re a no-go if you want your websites to be indexed in search engines or even parsed by bots in general.

In this article, I will talk mostly about Vue.js, since it is the framework I’ve used most, and with which I have direct experiences in terms of indexing by the search engines on major projects, but I can assume that most of what I will cover is valid for other frameworks, too.


Some Background On The Problem

How Indexing Works

For your website to be indexed by Google, it needs to be crawled by Googlebot (an automated indexing software that visits your website and saves the contents of pages to its index) following links within each page. Googlebot also looks for special Sitemap XML files in websites to find pages that might not be linked correctly from your public site and to receive extra information on how often the pages in the website change and when they have last changed.

A Little Bit Of History

Until a few years ago (before 2009), Google used to index the content of a website’s HTML — excluding all the content created by JavaScript. It was common SEO knowledge that important links and content should not be written by JavaScript since it would not get indexed by Google, and it might cause a penalty for the website because Google might consider it “fake content” as if the website’s owner was trying to show users something different from what was shown to the search engines and trying to fool the latter.

It was very common practice by scammers to put a lot of SEO-friendly content in the HTML and hide it in JavaScript, for example. Google has always warned against this practice:

“Serving Googlebot different content than a normal user would see is considered cloaking, and would be against our Webmaster Guidelines.”

You could get penalized for this. In some cases, you could be penalized for serving different content to different user agents on the server side, but also for switching content via JavaScript after the page has loaded. I think this shows us that Google has been indexing websites executing JavaScript for a long time — at least for the sake of comparing the final HTML of the website (after JavaScript execution) and the raw HTML it was parsing for its indexes. But Googlebot did not execute JavaScript all the time, and Google was not using the JavaScript-generated content for indexing purposes.

Then, given the increased usage of AJAX to deliver dynamic content on websites, Google proposed an “AJAX crawling scheme” to help users index AJAX-based websites. It was very complicated; it basically required the website to produce a rendering of pages with AJAX content included. When requested by Google, the server would provide a version of the page with all (or most) of the content that would have been generated dynamically by JavaScript included in the HTML page — pre-rendered as an HTML Snapshot of the content. This process of having a server-side solution deliver content that was (for all other purposes) meant to be generated client-side, implied that those wanting to have a site that heavily relied on JavaScript indexed in Google had to go through a lot of technical hassles.

For example, if the content read by AJAX came from an external web service, it was necessary to duplicate the same web service calls server-side, and to produce, server-side, the same HTML that would have been produced client-side by JavaScript — or at least a very similar one. This was very complicated because, before the advent of Node.js, it required at least partially duplicating the same rendering logic in two different programming languages: JavaScript for the frontend, and PHP, Java, Python, Ruby, and so on, on the backend. This is called “server-side rendering”, and it could lead to maintenance hell: if you made important changes to how you were rendering content in the frontend, you had to duplicate those changes on the backend.

The only alternative to avoid duplicating the logic was to parse your own site with a browser executing JavaScript and save the end results to your server and serve those to Googlebot. This is sort of similar to what is now called “pre-rendering”.

Google (with its AJAX crawling scheme) also guaranteed that you would avoid penalties due to the fact that in this case you were serving different content to Googlebot and to the user. However, since 2015, Google has deprecated that practice with an official blog post that told website managers the following:

“Today, as long as you’re not blocking Googlebot from crawling your JavaScript or CSS files, we are generally able to render and understand your web pages like modern browsers.”

What this told us was not that Googlebot had suddenly acquired the capability of executing JavaScript when indexing web pages, since we know that it had done so for a very long time (at least to check for fake content and scams). Instead, it told us that the result of JavaScript execution would be indexed and used in SERPs.

This seems to imply that we don’t have to worry about providing Google with server-side rendered HTML anymore. However, given that we see all sorts of tools for server-side rendering and pre-rendering made available for JavaScript frameworks, this does not seem to be the whole story. Also, when dealing with SEO agencies on big projects, pre-rendering seems to be considered mandatory. How come?

How Does Google Actually Index Pages Created With Front-End Frameworks?

The Experiment

In order to see what Google actually indexes in websites that have been created with a front-end framework, I built a little experiment. It does not cover all use cases, but it is at least a means to find out more about Google’s behavior. I built a small website with Vue.js and had different parts of text rendered differently.

The website’s contents are taken from the description of the book Infinite Jest by David Foster Wallace in the Infinite Jest Wiki (thanks guys!). There are a couple of introductory texts for the whole book, and a list of characters with their individual biography:

  • Some text in the static HTML, outside of the Vue.js main container;
  • Some text is rendered immediately by Vue.js because it is contained in variables which are already present in the application’s code: they are defined in the component’s data object;
  • Some text is rendered by Vue.js from the data object, but with a delay of 300ms;
  • The character bios come from a set of REST APIs, which I’ve built on purpose using Sandbox. Since I was assuming that Google would execute the website’s code and stop after some time to take a snapshot of the current state of the page, I set each web service to respond with an incremental delay, the first with 0ms, the second with 300ms, the third with 600ms and so on up to 2700ms.

Each character bio is shortened and contains a link to a sub-page, which is available only through Vue.js (URLs are generated by Vue.js using the history API), but not server-side (if you call the URL of the page directly, you get no response from the server), to check if those got indexed too. I assumed that these would not get indexed, since they are not proper links which render server-side, and there’s no way that Google can direct users to those links directly. But I just wanted to check.

I published this little test site to my Github Pages and requested indexing — take a look.

The Results

The results of the experiment (concerning the homepage) are the following:

  • The contents which are already in the static HTML content get indexed by Google (which is rather obvious);
  • The contents which are generated by Vue in real-time always get indexed by Google;
  • The contents which are generated by Vue, but rendered after 300ms get indexed as well;
  • The contents which come from the web service, with some delay, might get indexed, but not always. I’ve checked Google’s indexing of the page in different moments, and the content which was inserted last (after a couple of seconds) sometimes got indexed, sometimes it didn’t. The content that gets rendered pretty quickly does get indexed most of the time, even if it comes from an asynchronous call to an external web service. This depends on Google having a render budget for each page and site, which depends on its internal algorithms, and it might vary wildly depending on the ranking of your site and the current state of Googlebot’s rendering queue. So you cannot rely on content coming from external web services to get indexed;
  • The subpages (as they are not accessible as a direct link) do not get indexed as expected.

What does this experiment tell us? Basically, that Google does index dynamically generated content, even if it comes from an external web service, but it is not guaranteed that content will be indexed if it “arrives too late”. I have had similar experiences with other real, production websites besides this experiment.

Competitive SEO

Okay, so the content gets indexed, but what this experiment doesn’t tell us is: will the content be ranked competitively? Will Google prefer a website with static content to a dynamically-generated website? This is not an easy question to answer.

From my experience, I can tell that dynamically-generated content can rank in the top positions of the SERPS. I’ve worked on the website for a new model of a major car company, launching a new website with a new third-level domain. The site was fully generated with Vue.js — with very little content in the static HTML besides <title> tags and meta descriptions.

The site started ranking for minor searches in the first few days after publication, and the text snippets in the SERPs reported words coming directly from the dynamic content.

Within three months it was ranking first for most searches related to that car model — which was relatively easy since it was hosted on an official domain belonging to the car’s manufacturer, and the domain was heavily linked from reputable websites.

But given the fact that we had had to face strong opposition from the SEO company that was in charge of the project, I think that the result was still remarkable.

Due to the tight deadlines and lack of time given for the project, we were going to publish the site without pre-rendering.

Animated Text

What Google does not index is heavily-animated text. The site of one of the companies I work with, Rabbit Hole Consulting, contains lots of text animations, which are performed while the user scrolls, and require the text to be split into several chunks across different tags.

The main texts in the website’s home page are not meant for search engine indexing since they are not optimized for SEO. They are not made of tech-speak and do not use keywords: they are only meant to accompany the user on a conceptual journey about the company. The text gets inserted dynamically when the user enters the various sections of the home page.

(Image: scroll-triggered text animations on the Rabbit Hole Consulting home page. Image source: Rabbit Hole Consulting)

None of the texts in these sections of the website gets indexed by Google. In order to get Google to show something meaningful in the SERPs, we added some static text in the footer below the contact form, and this content does show as part of the page’s content in SERPs.

The text in the footer gets indexed and shown in SERPs, even though it is not immediately visible to the users unless they scroll to the bottom of the page and click on the “Questions” button to open the contact form. This confirms my opinion that content does get indexed even if it is not shown immediately to the user, as long as it is rendered soon to the HTML — as opposed to being rendered on-demand or after a long delay.

What About Pre-Rendering?

So, why all the fuss about pre-rendering — be it done server-side or at project compilation time? Is it really necessary? Although some frameworks, like Nuxt, make it much easier to perform, it is still no picnic, so the choice whether to set it up or not is not a light one.

I think it is not compulsory. It is certainly a requirement if a lot of the content you want to get indexed by Google comes from external web services and is not immediately available at rendering time, and might — in some unfortunate cases — not be available at all due to, for example, web service downtime. If, during Googlebot’s visits, some of your content arrives too slowly, then it might not be indexed. If Googlebot indexes your page exactly at a moment in which you are performing maintenance on your web services, it might not index any dynamic content at all.

Furthermore, I have no proof of ranking differences between static content and dynamically-generated content. That might require another experiment. I think that it is very likely that, if content comes from an external web service and does not load immediately, it might impact Google’s perception of your site’s performance, which is a very important factor for ranking.

Recommended reading: How Mobile Web Design Affects Local Search (And What To Do About It)

Other Considerations

Compatibility

Up until recently, Googlebot used a fairly old version of Chromium (the open-source project on which Google Chrome is based), namely version 41. This meant that some recent JavaScript or CSS features could not be rendered by Google correctly (e.g. IntersectionObserver, ES6 syntax, and so on).

Google has recently announced that it is now running the latest version of Chromium (74, at the time of writing) in Googlebot, and that the version will be updated regularly. The fact that Google was running Chromium 41 might have had big implications for sites which decided to disregard compatibility with IE11 and other old browsers.

You can see a comparison of Chromium 41 and Chromium 74’s support for features here. However, if your site was already polyfilling missing features to stay compatible with older browsers, there should have been no problem.

Always use polyfills since you never know which browser misses support for features that you think are commonplace. For example, Safari did not support a major and very useful new feature like IntersectionObserver until version 12.1, which came out in March 2019.

JavaScript Errors

If you rely on Googlebot executing your JavaScript to render vital content, then major JavaScript errors which could prevent the content from rendering must be avoided at all costs. While bots might parse and index HTML which is not perfectly valid (although it is always preferable to have valid HTML on any site!), if there is a JavaScript error that prevents the loading of some content, then there is no way Google will index that content.

In any case, if you rely on JavaScript to render vital content to your end users, then it is likely that you already have extensive unit tests to check for blocking errors of any kind. Keep in mind, however, that JavaScript errors can arise from unpredictable scenarios, for example, from improper handling of errors in API responses.

It is better to have some real-time error-checking software in place (such as Sentry or LogRocket) which will alert you of any edge-case errors you might not pick up during unit or manual testing. This adds to the complexity of relying on JavaScript for SEO content.

Other Search Engines

The other search engines do not work as well as Google with dynamic content. Bing does not seem to index dynamic content at all, nor do DuckDuckGo or Baidu. Probably those search engines lack the resources and computing power that Google has in spades.

Parsing a page with a headless browser and executing JavaScript for a couple of seconds to parse the rendered content is certainly more resource-heavy than just reading plain HTML. Or maybe these search engines have made the choice not to scan dynamic content for some other reasons. Whatever the cause of this, if your project needs to support any of those search engines, you need to set up pre-rendering.

Note: To get more information on other search engines’ rendering capabilities, you can check this article by Bartosz Góralewicz. It is a bit old, but according to my experience, it is still valid.

Other Bots

Remember that your site will be visited by other bots as well. The most important examples are Twitter, Facebook, and other social media bots that need to fetch meta information about your pages in order to show a preview of your page when it is linked by their users. These bots will not index dynamic content, and will only show the meta information that they find in the static HTML. This leads us to the next consideration.

Subpages

If your site is a so-called “One Page website”, and all the relevant content is located in one main HTML, you will have no problem having that content indexed by Google. However, if you need Google to index and show any secondary page on the website, you will still need to create static HTML for each of those — even if you rely on your JavaScript Framework to check the current URL and provide the relevant content to put in that page. My advice, in this case, is to create server-side (or static) pages that at least provide the correct title tag and meta description/information.

Conclusions

The conclusions I’ve come to while researching this article are the following:

  1. If you only target Google, it is not mandatory to use pre-rendering to have your site fully indexed, however:
  2. You should not rely on third-party web services for content that needs to be indexed, especially if they don’t reply quickly.
  3. The content you insert into your HTML immediately via Vue.js rendering does get indexed, but you shouldn’t rely on animated text, or on text that gets inserted into the DOM after user actions like scrolling, to be indexed.
  4. Make sure you test for JavaScript errors as they could result in entire pages/sections not being indexed, or your site not being indexed at all.
  5. If your site has multiple pages, you still need to have some logic to create pages that, while relying on the same front-end rendering system as the home page, can be indexed by Google as individual URLs.
  6. If you need to have different description and preview images for social media between different pages, you will need to address this too, either server-side or by compiling static pages for each URL.
  7. If you need your site to perform on search engines other than Google, you will definitely need pre-rendering of some sort.

Acknowledgements: Many thanks to Sigrid Holzner of SEO Bavaria / Rabbit Hole Consulting for her review of this article.

Source: Smashing Magazine, Vue.js And SEO: How To Optimize Reactive Websites For Search Engines And Bots

Designing For Users Across Cultures: An Interview With Jenny Shen

dreamt up by webguru in Uncategorized | Comments Off on Designing For Users Across Cultures: An Interview With Jenny Shen

Designing For Users Across Cultures: An Interview With Jenny Shen

Designing For Users Across Cultures: An Interview With Jenny Shen

Rachel Andrew



In this video, we are pleased to feature Jenny Shen who is a UX Consultant and has worked with numerous startups and brands including Neiman Marcus, Crate&Barrel, eBuddy, IBM, TravelBird and Randstad. Her current focus is helping businesses innovate and designing inclusive product experiences for global users. She is interviewed by Jason Pamental, who has already spoken at our San Francisco conference. Jason is a strategist, designer, technologist, and author of Responsive Typography from O’Reilly.

In their conversation, we discover how we can approach localizing and internationalizing our websites, over and above simply offering a translation of the material. This is something that Jenny will also focus on in her talk at our Toronto SmashingConf.

Vitaly: Okay, hello everyone. I’m looking forward to having a wonderful conversation today. We have Jason with us today. Jason, how are we doing today?

Jason: I’m doing very well. I’m excited about this.

Vitaly: Oh yes.

Jason: Something new and fun.

Vitaly: This is new and fun. Some of you might know we have Smashing TV and Smashing TV is all about planning some sort of webinars and sessions and interviews and all that. We always look for new adventures. Jason, you like adventures?

Jason: Very much.

Vitaly: Who doesn’t like adventures? In this particular adventures, we’re looking into actually just having conversations. Like, you know, you take a cup of coffee, sit down with a person you admire or you like or you feel like they have something to share. You just have a conversation. This is not about slides, it’s not about presenting, it’s all about really just kind of human interaction between two people genuinely interested in a particular topic. And so, with that, I’m very privileged to have Jason with us today, who’s going to be the interviewer, and who’s going to introduce the speaker or the person who he’s going to speak with. We just came from Smashing Con, San Francisco two weeks ago. It was a wonderful experience because Jason would just come on stage, sit down, take a cup of coffee, work through his design process and stuff. And he’s very curious, right? This is something that you need in a person who can run interviews really well. You might see Jason more often in the future. Maybe, Jason, you can introduce yourself. What do you do for life? What’s the meaning of life for you?

Jason: Well, I suppose in the order of frequency, it’s spending time with my wife, walking the dogs which, most people see on Instagram, riding my bike, and then a whole lot of stuff about typography. Which, is what I was demonstrating when I was at Smashing, San Francisco. The thing that is sort of common for me that runs through is just being curious about stuff and learning new things so the chance to actually learn from more amazing people who are gonna be getting on stage at other Smashing events was too good to pass up. So, I’m pretty excited about this.

Vitaly: We couldn’t be more excited to have you. I think it’s time for me to get my breakfast. I’m sorry, I’m so hungry. I woke up four hours ago, was all about meetings and Jason will take over. Jason, have a wonderful time. I’m looking forward to seeing you when they’re wrapping up this session. Okay? Jason, the stage is yours.

Jason: Thanks, Vitaly. Well, I’m super excited about this for a whole bunch of reasons. But, the main one is I get to introduce to you someone who, correct me if I’m wrong, but I think this is the first time you’re speaking at a Smashing Event? Is that true?

Jenny Chen: Yes. It is the first time.

Jason: Okay. Well, The voice that you’re hearing and the face that you’re seeing is Jenny Chen who is a UX and localization consultant who’s worked with all kinds of big brands including Neiman Marcus, Crate and Barrel, and IBM. In the course of your travels over the web of a number of years which has some pretty amazing lists of credentials. I mean, some things that really stood out to me, that actually I think kind of made you a little bit more compelling in terms of who I really wanted to talk to first: is that not only are you doing all of this incredible work but you’re also a regional director for EMEA for Ladies UX, which is an amazing organization, and you also started your own mentorship program. That teaching aspect, you know, I think is one of the things that I love about getting up on stage and giving talks and workshops and stuff. So, before we actually jump into what you’re gonna be talking about, I’d really love to hear a little bit more from you, about your journey from Taipei, to where you are now, to how you came to be in this industry.

Jenny Chen: Yeah, sure. Thank you, Jason, for the amazing introduction. Yeah. So, as you were saying, I started from Taipei. I was born in Taipei, Taiwan. My journey was…I moved around in a lot of places. My family moved to Canada and I studied there. I studied in Vancouver, Canada.

Jason: Oh, wow.

Jenny Chen: Yeah. I studied Interaction Design. At the time it was like Human-Computer Interaction.

Jason: Right.

Jenny Chen: And then I moved to Singapore and now I’m based in the Netherlands consulting regarding UX projects/localization projects. And just like you mentioned, I am a volunteer EMEA director at Ladies UX and I also run my own mentorship program in the spare time. Yeah. I’ve also been speaking in [crosstalk 00:04:59]

Jason: Because you must have a load of spare time then? So, tell me a little bit about the typical day for you if there is one.

Jenny Chen: Mm-hmm (affirmative) Typical day. These days I have more of a typical day because I’m working with clients and then I am basically just taking my to-do list and doing the job that can help the organization, can help shape product strategy, offer feedback to designers, do some consulting on localization, working on research. And, yeah, like a typical day I could be reviewing a design, giving feedback to my design team. I could be helping a client with more of an approach to hire a designer and I could be running a workshop on product strategy, like really talking about, “This is model canvas and valid composition.” And some days I’m drafting a user research strategy and on some days I am flying over to a different country to actually conducting on-site localization and culture research. So, yeah, there’s not really a typical day because I really do different types of work, types of projects, and I get to work with really amazing clients.

Jason: That’s amazing. I’ve looked at your resume. Your speaking schedule last year was incredible. You were at some of the best events on the web. You were speaking on all kinds of different things. It makes me feel so monotonous. All I ever do is talk about web typography and you seem to cover an incredible range of topics. That really is fascinating to me. And I love that your focus is so well-rounded in that it’s not just about UX it’s also about how design can impact the business. And that’s something that I think is really fascinating and it’s really starting to gain a lot of prominence with research from InVision and McKinsey about what design can bring to the rest of the organization. So, how long has that been more of the focus about business model innovation and all those kinds of strategic topics?

Jenny Chen: Yeah. I actually just transitioned from a designer to a strategist in a little bit more than a year ago. I’d been in the design-

Jason: Really?

Jenny Chen: Yeah. I’d been in the design industry for like six, seven years and I’d been doing, you know, wireframing, the same type of thing the designer would do. Wireframes, prototypes, icons, and stuff like that and it was to the point I really wanted to be more involved on the business side of things. Now that I’m in this role for like more than a year, I really see how being more business-minded and being aware of the business goals and how that needs to work together with a strategy and a design to actually move the needle. Really, the starting point just because I’d been a designer for like six, seven years and I really want to do more. I really want to actually see the impact of my designs. So, that seems like the natural step. And I think learning from a lot of experts from my community as well as going to different conferences and listen and learn from those people who do strategy and are leading design. So, I’m very honored to have a chance to be in those conferences and learn from these leaders.

Jason: That’s really amazing. I hope we’ll have time to come back to that a little bit because I think a lot of designers, as they advance in their career, really look for ways that they can achieve a greater level of impact than just this one project they’re working on. And I think it’s really hard for designers to kind of figure out where they can go.

Jenny Chen: Yeah.

Jason: So, that’s amazing to hear that you’ve made such a great transition. I can’t help but think that there’s a really great relationship between multi-lingual, multi-cultural and localization as this sort of central part of business strategy and how it relates to design and I gather that’s kind of what you’re gonna be talking about in San Francisco. Is that…I’m sorry, in Toronto. Is that true?

Jenny Chen: Yeah. So, my talk will be on moreover how culture affects the design and then I’ll also be touching on how…what are some of the reasons…how can companies benefit from localization. How can companies benefit from expanding to a new market? So these are the type of things that I want to talk about in my talk in Toronto as well as showcasing some case studies. How do reputable companies…how do big companies approach localization and market expansion? Because I have been doing this specifically designs with multiple cultures since 2013 and I’ve definitely learned a lot and then also learned from the companies who are really experts in doing internationalization and localization. So, yeah. I am really excited to share about this.

Jason: That’s really great. And I think for a lot of people, when they think about a language addition to a website, they kind of lump adding a language into what people refer to when they say internationalization. But I know I learned a ton when I listened to Robin Larson from Shopify talk about their work over the past year or so in adding multiple languages to their system. But the phrase you used was localization and that was the thing that really stuck out to me and what I wanted to ask you about because that was something Robin spoke about where it’s not just the language but it’s the phrasing and it’s the cultural things about other aspects of design. So, I’d love to hear more about what that means to you when you’re designing and the kinds of things that you consider in adding a language…whether it’s English and Chinese or Korean or whatever the other kind of cultural implications that go along with that.

Jenny Chen: Yeah. So, regarding localization, for me, it means in all kinds of ways how to adapt a product, an interface, an application to meet the needs, the preferences, expectations, and behaviors of the local users. Like you mentioned, it’s not just about translation, but there are many things from icons, from symbols and colors and sometimes you have text direction and of course the content…all these sort of things that can help a local user feel like, “Hey, this app or this software is designed with me in mind. It’s just not some foreign company…they only hired some translators and they expect me to feel connected to the product.” So, localization, that’s what it means for me and that’s the kind of work that I like to do.

Jason: Mm-hmm (affirmative) And so how often is that work…just for frame of reference, I’ve mostly worked on web content management systems. So, when that first comes up, the first thing that comes to mind is, “Okay. I need to add a language pack. I need to factor this into the language encodings for the theme,” and that sort of thing. But I know there’s a lot of other considerations and there’s a whole range of what people work with. From things that are sort of static sites, where you have a lot of freedom to customize things. But I think a lot of us end up dealing with either its an app infrastructure or a website infrastructure that has to support those multiple languages. So what kind of scenarios have you had to deal with in terms of the technology behind…you know and how you…I’m trying to phrase this better. You know sort of implementing that design and finding the freedom to change the icons to change the phrasing to chan — you know, to make it feel connected. Are you often involved in the technical implementation of that or at least mapping things properly?

Jenny Chen: So actually, on a technical side, not really and there is really different kinds of clients. And then some of them I come into a project and they have already things mapped out, and then usually when I come in is when they have decided a market, or maybe they are thinking about localization. They haven’t decided what market, but they have the infrastructures in place, so I can’t really speak to about the technical infrastructure. But then I’m thinking like what might be useful for someone to know about like why rolling the process, and how to actually even think about, “Well should we change this icon?” It’s all related to — We should think about the business case of localization. I mean we don’t do it just because we can we don’t do it just because its fun, but localization or expanding to a different market or supporting multiple languages. There must be, well, there should be a business reason behind it: is because we want to expand markets, we want to expand to a different market, we want to reach the users, and definitely we are hoping for some success matrix from that market. And if we deem that this is a market that is likely to succeed, or we want to experiment and then the users of that culture/of that market, and they have a strong tendency to let say go for applications that are feel more native, feel more intuitive. And as user experience practitioners like we know that designing the user experience like is going to make the users more loyal, more engaged. So its also considering like the user experience business matrix to decide: Okay, do we want to have this customization available, do we even want to customize it, or do we just want to go for like the minimum localization effort which typically is translations and content localization.

Jason: And so how often does it go further than that? So I mean the things that come to mind that we had kinda gone back and forth a little bit in the questions beforehand: language length, or color, or typography, or layout. How often does that come into play?

Jenny Chen: Mm-hmm (affirmative) That’s a really good question. I would say that it really depends on the industry, it depends on the company stage, it also depends on like where are they, what their business goals are. For just a start-up it’s unlikely that they will fully customize it. They may even not expand into multiple markets when they are just figuring out their product market fit. But lets say for a really established company like Spotify, Shopify as well, they are… they already have a like a market that’s a home market that’s doing really well, and they want to expand and for the — for some target market where they have a really distinct culture like Japan for example, where there’s a lot of different like influences and that can actually affect the layout or the localization element, for example, Singapore or China. And then we look at evaluating what is the — what do we have to do to be successful in the market? For some market, it might not be necessary, like maybe for some markets, they might require less changes than the other. So, I would say, this is a really — it depends, kind of [inaudible 00:17:06] answer to kind of, for us to know, what is actually required and how often does it actually go beyond the basic localization?

Jason: Right, and so in your role, sort of advising your clients on these sorts of things… Do you actually go so far as think about what would the team look like that could do this successfully? Like, what kinds of designers and skills sets you would want to see, to help them be successful?

Jenny Chen: Hmm. Yeah, in my experience, the localization team — then again, depending on the state, depending on — are they in the beginning of setting up the team? Let’s say if they haven’t gotten a team set-up, usually, there is a localization team that takes care of the localization elements, or maybe some, to make sure there is consistency but there is also certain customization elements with the differing market but while the other product team could be focused on specific features. So let’s say like, the whole market team will design the checkout flow, the localization team will then take that checkout flow and customize it for a different market. And, depending on the company size, some more established companies, they could have like the Germany team, the Netherlands team, the Nordics team, the Latin team, to actually hire people who are aware of the culture differences, of the local expectations, the legal requirements and all those things that can actually make or break the product. They either hire people on the ground, or they hire people with that experience, with that knowledge, in their office.

Jason: Right.

Jenny Chen: But there’s really multiple ways we could go about it. What’s really most necessary is people with that knowledge, people with that cultural understanding who can actually design for that target market.

Jason: That’s great. I think that leads into a couple of other things that I really wanted to ask you about. One is, I mean, your background is so geographically varied. How much has that influenced your career direction, in terms of what interests you, and the kinds of things you’ve wanted to focus on?

Jenny Chen: When I was still studying [inaudible 00:19:36] people always like set out their career goal, and what they wanted to be, in 5 years, 10 years, what I want to do. I honestly have never thought that I would be in this industry, in the localization industry. And, I really love what I do, and I think the reason why I’m doing this and maybe like, what shaped my path going here is just having curiosity, you know, towards other cultures and towards the world. I guess, as I traveled more and more, my mind started to open and to really understand cultural differences, the local ways of life. And, being a UX designer like understanding how important it is to have our product user-centered. Then I look at people who are living in other countries and I see, you know, what kind of things do they actually use: What kind of apps, what kind of website, and how that’s so different from what we know and what we’re accustomed to. That’s one of the reasons, curiosity. I really love to travel and I also have moved to many countries just to really be immersed in the local culture, really connect with the local people, try to learn some local language. I’m terrible at Dutch (laughs), but I try where I can. I think it really has enriched my life, it really has enriched my professional experience. I mean, when I moved to Singapore, that’s actually how it gave me the opportunity to design for Malaysia, the Philippines, and Indonesia and countries in that region. When I moved to Amsterdam, I was able to design for Spain, and France, and Germany, and Turkey, like all the countries in this region. I feel very blessed and I really love what I do. I think, again, my curiosity and passion for traveling definitely have played a role in this.

Jason: Yeah, sure sounds like it. So, if you were to try and take — well, so, there’s two parts to this one: I’m wondering if there’s something that you would want, if you could go back to do differently? Like, is there something you had wished you learned more. You’ve moved into business strategy quite a bit more, do you wish you would have studied business? What are the sorts of things that you’re looking to fill in now, that you maybe wished you had learned earlier?

Jenny Chen: Yeah, I think about this sometimes. I think it might have been quite helpful if I had studied business administration. But, at the same time, having a degree in design, and having a solid training background in research… I think that’s also a huge asset. Oftentimes I talk to clients and they actually need a researcher. They need somebody who has done this a lot, somebody who understands the science behind user interfaces, usability testing, and how to, like, minimize bias in the whole research process. So I feel like maybe I should’ve studied business, but at the same time, I’m also really happy that I studied design.

Jason: Sure.

Jenny Chen: But something I’m definitely trying to make up for, where I don’t have so much expertise, is the business side. I am talking to experts in this area, I’m reading books and listening to podcasts. But definitely, for someone who wants to take on a strategist role, I would say that would be really helpful. Right now, rather than the design and what tools to use, I’m definitely more interested in learning about the business side of things.

Jason: Mm-hmm (affirmative). At an agency I worked at a few years ago, a bunch of us actually took a Coursera class together, and had a little discussion every week about — It was an MBA focused program to learn about business models, structures, and what is the business model canvas and all those kinds of things. That was really fascinating, I certainly appreciated that. So, the other side of that last question was: your advice to designers who are looking to do more work like this. What are the kinds of things that if a designer wants to understand localization more, and start to move into this world, what kinds of advice would you have for them?

Jenny Chen: I think one thing that quite helped me to do my work in localization is just to be, again, be curious. Not just curious, and physically traveling. Let’s say, for a designer who might not have the opportunity to go abroad and do a research trip in another country, we can at least look at international tech news. I still stay in [inaudible 00:24:38] my contacts in Singapore, and I read tech news in Southeast Asia, in Taiwan, in other countries where there are English versions available, or at least in a language that I can read. You can also download apps or go on websites and really just try to be more aware of how designs or how the software can be different. And definitely keep an eye out for what other companies are doing in other markets. That is definitely really interesting. We can follow news sources like TechCrunch or The Next Web; there are a lot of news sources, just to keep an eye out and also learn what people are actually doing regarding localization.

Jason: That’s amazing, great. That’s awesome advice, thank you. The last thing I’m going to ask you about — I think we’re probably getting close to a good time to wrap up but, for you, now, with all these things that you’re doing, what’s getting you really excited? What’s the new thing that you see coming that you’re really excited to learn about and incorporate in the work that you do?

Jenny Chen: Something that’s really new and really exciting… For me personally, I’m just really happy that more and more people are thinking about localization and sharing that knowledge. Like what you just said, Robin is great, and I really like the work that she does, and so people like her, people like me, are sharing and raising awareness of the importance of considering the local cultures, considering the nuances when developing a localized product. Overall, I’m just happy that people are raising awareness of this issue. I really hope more and more companies who are actually doing it would go on stage, or write or speak more about it, so other people can ultimately learn from the successful companies. I’m sure Facebook does a lot of things, Dropbox does a lot of things, but so far we haven’t seen people actively talking about localization or internationalization, so that’s something I’m really excited about.

Jason: That’s great. Well, this has been absolutely amazing, I can’t thank you enough. For anyone who is going to be in Toronto, I hope that — if any of you are listening, I hope that you take this to heart. Go say hi to Jenny, tell her how much her work has influenced you. It’s such a big part of being at these events to be able to meet people and learn more about what they’re working on. Don’t hesitate, that’s what we’re all there for. We’re all learning together, some of us have just read a couple of pages ahead, and we’re happy to share what we’ve learned. Thank you so much, Jenny, this has been amazing.

Jenny Chen: Thank you so much Jason. I’m so happy to take part in this, thank you.

Vitaly: Thank you to both of you for actually making it all happen. Wonderful interview, and also wonderful insights from you, Jenny, thank you so much for that. Just a quick note from me: this is just one of the little sessions that we have about people who are going to speak at our conferences, but also just interesting people doing interesting work. This is important. I think at this point we often fail to highlight the people who are passionately working hard behind the scenes, doing incredible work to change the world. So this is kind of just our humble attempt to bring a little bit of spotlight to those people. With this in mind, thank you so much for watching. Looking forward to the next one.

That’s A Wrap!

We’re looking forward to welcoming Jenny at SmashingConf Toronto 2019, with a live session on designing for users across cultures. We’d love to see you there as well!

Please let us know if you find this series of interviews useful, and whom you’d love us to interview, or what topics you’d like us to cover and we’ll get right to it!

Smashing Editorial
(ra, il)

Source: Smashing Magazine, Designing For Users Across Cultures: An Interview With Jenny Shen

Monthly Web Development Update 5/2019: Over-Complication And Performative Workaholism

dreamt up by webguru in Uncategorized | Comments Off on Monthly Web Development Update 5/2019: Over-Complication And Performative Workaholism

Monthly Web Development Update 5/2019: Over-Complication And Performative Workaholism

Monthly Web Development Update 5/2019: Over-Complication And Performative Workaholism

Anselm Hannemann



This week, I was at the amazing beyondtellerrand conference once again, and every single time I come home from such an event, I try to understand our industry and our society better. There’s so much input and inspiration around, I meet a lot of friends and people I see only once a year, I listen to great talks. People tell me how frustrated they are with their jobs, we hear amazing stories of people who seem to have an amazing life, we hear people moaning about bad players on the web, but rarely do we hear real insights or solutions.

Presentations highlighting the good parts and uncommon paths in life are quite rare, but one of the exceptions is Rob Draper’s beyondtellerrand talk in which he shares his story and how an unexpected series of events created the role he is in today. And, well, I’m glad that there are amazing people who believe in humans and share how we all as individuals can do something to have a better job and life: it might be, as Stephen Hay suggests, trusting your own ideas and building your own website and social system, or, as my good friend Andy is doing, building a non-profit initiative to build schools in Africa, a project into which he invests not only a lot of time but money as well.

It’s great to see these visions of a better world, and it feels like a good community to be in. The web is so much more than just a space to build technical solutions and write code; it’s a place to create helpful, meaningful, and beautiful individual things.

News

  • Let’s make things official: Safari 12.1 now supports Dark Mode. Check the full article for how to apply it to your pages or take a look at one of the sites like Twitter or Colloq that already support it. Safari’s Developer Tools feature a debug mode for Dark Mode now as well.
  • Chrome 74 is public. The new version lets us detect whether a user requested reduced motion, and the Feature Policy API got updates, too: we can now call document.featurePolicy.allowedFeatures() for all allowed features, allowsFeature() for a single feature, or document.featurePolicy.getAllowlistForFeature() for the list of origins that are allowed to use a feature (see the sketch after this list).
  • Googlebot is evergreen now. This means that Google’s search crawler gets the newest Chromium version automatically. From now on, it supports ES6, ECMAScript Modules and newer functionality and understands lazy-loaded content via IntersectionObserver and the WebComponents v1 APIs. It might be time to drop our ES6 transpilers soon.
  • The Web Share API is a nice addition to make more use of websites. And while it has been available on Chrome for Android for quite some time now, Safari is bringing the feature to macOS and iOS in its latest version.
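
For those curious, here is a rough, illustrative sketch (assuming Chrome 74+) of how the reduced-motion preference and the new Feature Policy introspection calls can be used from JavaScript; the 'camera' and 'geolocation' feature names are just examples:

// Respect the user's motion preference (media query supported by Chrome 74).
const prefersReducedMotion =
  window.matchMedia('(prefers-reduced-motion: reduce)').matches;

if (prefersReducedMotion) {
  // Skip or tone down non-essential animations here.
}

// Inspect the Feature Policy of the current document (where supported).
if (document.featurePolicy) {
  const allFeatures = document.featurePolicy.allowedFeatures();
  const cameraAllowed = document.featurePolicy.allowsFeature('camera');
  const geoOrigins = document.featurePolicy.getAllowlistForFeature('geolocation');
  console.log(allFeatures, cameraAllowed, geoOrigins);
}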

General

  • Stefan Judis shares a roundup article on how to keep the web a safe place, making it affordable and fast and tailoring the response to the user — all with HTTP headers. A good read for everybody as we all tend to forget about these things in our daily work.
  • The annual Mozilla 2019 Internet Health Report examines how humanity and the internet intersect. Here’s the report itself with some short answers for those who don’t want to read it completely.
  • On-call rotation is a common thing in tech, and I know that a lot of teams struggle with it. That’s why I found this guide on “On-call at any size” quite informative and useful. It explains how to prepare and what to do — no matter if you’re a small team or part of a big corporation.
  • Emily Shaffer shares how to annotate regular expressions to make them comprehensible for others as well.

Stick figures showing how many people are online and how many offline in which part of the world. Most people who are online come from Asian and Pacific countries, followed by the Americas.

If there were only 100 people in the world, who would be online? That’s only one of the questions which Mozilla’s Internet Health Report 2019 answers. (Image credit)

UI/UX

Paths to simplification illustrated with circles and arrows. Subtract, Consolidate, Redistribute, Prioritize, Clarify.

How do you fix the UX of a product that has become overly complicated? Patrick Faller shows paths to simplification. (Image credit)

Tooling

  • GitHub is completing the experience by integrating their own package registry for npm (but also RubyGems, Docker, Maven, and NuGet) into the platform. This is a huge step, as it makes publishing custom and private packages a lot easier.

Privacy

Security

  • The Google AMP project announced that they’re going to “simplify” AMP domains in Google Chrome. This means that users would see the original URL in the browser bar while really being on a Google AMP server. An interesting approach, given the fact that this is something that browser vendors usually don’t allow in order to prevent URL spoofing.

Accessibility

  • stylelint-a11y is a plugin for stylelint that enforces accessibility best practices via the CSS linter.

JavaScript

CSS

Work & Life

  • How do productivity and promises correlate? In times of constant demands, too much work to do, and blurry information about priorities and different senses of urgency, you can hardly blame people for breaking with their promises anymore. If we’re constantly confronted with other people’s expectations like “please get back to me by 1 PM today”, how can we stick to our original schedule for the day and be productive? Should we ignore such external demands and say “we had better things to do” than replying to the non-urgent but urgency-creating email “in time”? It definitely takes some courage to do so, but in the end, this is what productivity is about: sticking to a schedule and dedicating focus time to one single task.
  • When did performative workaholism become a lifestyle? The New York Times takes a closer look at the culture of busyness, hustling, and the weird love we develop for working faster and more. But what about our lives when we work for 12 or 18 hours a day? And what about the promise that automation will take work off our hands?
  • Do you do standup calls? Here’s why this is a costly thing that even hurts your teammates’ efficiency.
  • “Stop being so busy and just do nothing. Trust us.” This claim in the New York Times has its reasons: in a world of stress and an environment where we embrace working all day, we need to remember to stop and take time for ourselves.
  • We tend to make judgments about other people’s work. That’s why we like to declare something as “low-hanging fruit,” assuming that the task is easy to do and doesn’t take much time or effort. But we forget that we might miss a couple of circumstances and it might become a bigger task than anticipated. Jason Fried says that we should be careful when we use the word “easy” to describe other people’s jobs.
  • The founder of ConvertKit, Nathan Barry, shares a couple of insights into how they run the business in an unconventional way: They pay standardized salaries, make their revenue public, and distribute 60% of company profits to the team.

Screenshot from the New York Times article ‘Why are young people pretending to love work’. Under the heading, there’s a propaganda-poster style illustration of three young people holding laptops, phones, and tablets, making a fist with their right hand. The background of the poster says ‘Hustle’.

When did performative workaholism become a lifestyle? The New York Times dedicated an article to the topic. (Image credit)

Going Beyond…

  • “If anything about this age is rare, perhaps it is the possibility that our fraught networked systems have finally reached such a unique point, with their environmental and social consequences so visibly intertwined, that they have become impossible to ignore.” — Ingrid Burrington in “A rare and toxic age.”
  • Let’s hand over the best possible. The best environment for the next generation. The best work for the employees that take over work from you. Keep it at heart for every aspect of life, and you’ll see that it makes a difference. To other people and to you. It feels good to do good.
  • What’s low-tech, sustainable, and possibly the most effective thing we can do to fight climate change? Planting trees. A trillion of them.
  • What are we doing to our earth? It seems despite the rising awareness of plastic pollution, global sales of plastic and glass bottles, cans, and cartons are still rising. There are so many alternatives, can we please stop buying one-time plastic packaging and coffee to-go — each of us, now?
  • When we feel overloaded, we tend to lash out at someone in frustration and anger. This comes from the hope that things will be calm, orderly, simple, solid, and under control. However, the world doesn’t comply with this hope, as it is chaotic, constantly changing, never fixed, groundless. So we get anxious and angry at others. But we can create a habit of calm when feeling frustrated.
  • What energy impact does your phone, that small screen you hold in your hands every day, have? We use video calls and messengers, and upload our photos to the cloud. But all the cloud services and the 4G network itself use a huge amount of energy that we tend to forget about. This article dives deeper into the dependencies of using a smartphone these days, and why it matters to save data and reduce your phone usage — even if it’s just for your own sake.

One more thing: If you like my reading lists, please consider making a donation. Donating to Makuyuni counts as well.

—Anselm

Smashing Editorial
(cm)

Source: Smashing Magazine, Monthly Web Development Update 5/2019: Over-Complication And Performative Workaholism

Collective #516

dreamt up by webguru in Uncategorized | Comments Off on Collective #516


C516_subgrid

Subgrid

Learn all about the new subgrid value of CSS Grid in this MDN guide.

Read it


Divi

Our Sponsor

Divi: The Powerful Visual Page Builder

Divi is a revolutionary WordPress theme and visual page builder. With Divi, you can build your website visually. Add, arrange, and design content and watch everything happen instantly right before your eyes.

Try it








C516_Ola

Ola

A smooth animation library for inbetweening/interpolating numbers in realtime.

Check it out





C516_handui

HandUI

A repo by Eugene Krivoruchko that contains examples of UI design for tracked hands in AR/VR featuring remote interaction.

Check it out









C516_icons

Flight Icons

Flight by Brodie Pointon is an animated icon pack built for iOS, Android and web (using the Lottie Framework) and video. Built on the back of Feather Icons by Cole Bemis.

Get it


Collective #516 was written by Pedro Botelho and published on Codrops.


Source: Codrops, Collective #516

Creating Your Own React Validation Library: The Basics (Part 1)

dreamt up by webguru in Uncategorized | Comments Off on Creating Your Own React Validation Library: The Basics (Part 1)

Creating Your Own React Validation Library: The Basics (Part 1)

Creating Your Own React Validation Library: The Basics (Part 1)

Kristofer Selbekk



I’ve always thought form validation libraries were pretty cool. I know, it’s a niche interest to have — but we use them so much! At least in my job — most of what I do is constructing more or less complex forms with validation rules that depend on earlier choices and paths. Understanding how a form validation library would work is paramount.

Last year, I wrote one such form validation library. I named it “Calidation”, and you can read the introductory blog post here. It’s a good library that offers a lot of flexibility and uses a slightly different approach than the other ones on the market. There are tons of other great libraries out there too, though — mine just worked well for our requirements.

Today, I’m going to show you how to write your very own validation library for React. We will go through the process step by step, and you’ll find CodeSandbox examples as we go along. By the end of this article, you will know how to write your own validation library, or at the very least have a deeper understanding of how other libraries implement “the magic of validation”.

  • Part 1: The Basics
  • Part 2: The Features
  • Part 3: The Experience

Step 1: Designing The API

The first step of creating any library is designing how it’s going to be used. It lays the foundation for a lot of the work to come, and in my opinion, it’s the single most important decision you’re going to make in your library.

It’s important to create an API that’s “easy to use”, and yet flexible enough to allow for future improvements and advanced use cases. We’ll try to hit both of these goals.

We’re going to create a custom hook that will accept a single configuration object. This will allow for future options to be passed without introducing breaking changes.

A Note On Hooks

Hooks is a pretty new way of writing React. If you’ve written React in the past, you might not recognize a few of these concepts. In that case, please have a look at the official documentation. It’s incredibly well written, and takes you through the basics you need to know.

We’re going to call our custom hook useValidation for now. Its usage might look something like this:

const config = {
  fields: {
    username: {
      isRequired: { message: 'Please fill out a username' },
    },
    password: {
      isRequired: { message: 'Please fill out a password' },
      isMinLength: { value: 6, message: 'Please make it more secure' }
    }
  },
  onSubmit: e => { /* handle submit */ }
};
const { getFieldProps, getFormProps, errors } = useValidation(config);

The config object accepts a fields prop, which sets up the validation rules for each field. In addition, it accepts a callback for when the form submits.

The fields object contains a key for each field we want to validate. Each field has its own config, where each key is a validator name, and each value is a configuration property for that validator. Another way of writing the same would be:

{
  fields: {
    fieldName: {
      oneValidator: { validatorRule: 'validator value' },
      anotherValidator: { errorMessage: 'something is not as it should' }
    }
  }
}

Our useValidation hook will return an object with a few properties — getFieldProps, getFormProps and errors. The first two functions are what Kent C. Dodds calls “prop getters” (see here for a great article on those), and are used to get the relevant props for a given form field or form tag. The errors prop is an object with any error messages, keyed per field.

This usage would look like this:

const config = { ... }; // like above
const LoginForm = props => {
  const { getFieldProps, getFormProps, errors } = useValidation(config);
  return (
    <form {...getFormProps()}>
      <label>
        Username<br/>
        <input {...getFieldProps('username')} />
        {errors.username && (
          <div>{errors.username}</div>
        )}
      </label>
      <label>
        Password<br/>
        <input {...getFieldProps('password')} />
        {errors.password && (
          <div>{errors.password}</div>
        )}
      </label>
      <button type="submit">Submit my form</button>
    </form>
  );
};

Alrighty! So we’ve nailed the API.

Note that we’ve created a mock implementation of the useValidation hook as well. For now, it’s just returning an object with the objects and functions we require to be there, so we don’t break our sample implementation.

Storing The Form State 💾

The first thing we need to do is to store all of the form state in our custom hook. We need to remember the values of each field, any error messages, and whether or not the form has been submitted. We’ll use the useReducer hook for this, since it allows for the most flexibility (and less boilerplate). If you’ve ever used Redux, you’ll see some familiar concepts — and if not, we’ll explain as we go along! We’ll start off by writing a reducer, which is passed to the useReducer hook:

const initialState = {
  values: {},
  errors: {},
  submitted: false,
};

function validationReducer(state, action) {
  switch(action.type) {
    case 'change': 
      const values = { ...state.values, ...action.payload };
      return { 
        ...state, 
        values,
      };
    case 'submit': 
      return { ...state, submitted: true };
    default: 
      throw new Error('Unknown action type');
  }
}

What’s A Reducer? 🤔

A reducer is a function that accepts an object of values and an “action” and returns an augmented version of the values object.

Actions are plain JavaScript objects with a type property. We’re using a switch statement to handle each possible action type.

The “object of values” is often referred to as state, and in our case, it’s the state of our validation logic.

Our state consists of three pieces of data — values (the current values of our form fields), errors (the current set of error messages) and a submitted flag indicating whether or not our form has been submitted at least once.
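
To make the reducer’s behavior concrete, here is a small sketch of calling it by hand with a change action (the field name and value are just examples):

// A sketch: feeding the reducer a 'change' action by hand.
const next = validationReducer(initialState, {
  type: 'change',
  payload: { username: 'kristofer' },
});

console.log(next.values);    // { username: 'kristofer' }
console.log(next.submitted); // still false; only a 'submit' action flips it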

In order to store our form state, we need to implement a few parts of our useValidation hook. When we call our getFieldProps method, we need to return an object with the value of that field, a change-handler for when it changes, and a name prop to track which field is which.

function validationReducer(state, action) {
  // Like above
}

const initialState = { /* like above */ };

const useValidation = config => {
  const [state, dispatch] = useReducer(validationReducer, initialState);
  
  return {
    errors: state.errors,
    getFormProps: e => {},
    getFieldProps: fieldName => ({
      onChange: e => {
        if (!config.fields[fieldName]) {
          return;
        }
        dispatch({ 
          type: 'change', 
          payload: { [fieldName]: e.target.value } 
        });
      },
      name: fieldName,
      value: state.values[fieldName],
    }),
  };
};

The getFieldProps method now returns the props required for each field. When a change event is fired, we ensure that field is in our validation configuration, and then tell our reducer a change action took place. The reducer will handle the changes to the validation state.

Validating Our Form 📄

Our form validation library is looking good, but isn’t doing much in terms of validating our form values! Let’s fix that. 💪

We’re going to validate all fields on every change event. This might not sound very efficient, but in the real world applications I’ve come across, it isn’t really an issue.

Note, we’re not saying you have to show every error on every change. We’ll revisit how to show errors only when you submit or navigate away from a field later in this article.

How To Pick Validator Functions

When it comes to validators, there are tons of libraries out there that implement all the validation methods you’d ever need. You can also write your own if you want. It’s a fun exercise!

For this project, we’re going to use a set of validators I wrote some time ago — calidators. These validators have the following API:

function isRequired(config) {
  return function(value) {
    if (value === '') {
      return config.message;
    } else {
      return null;
    }
  };
}

// or the same, but terser

const isRequired = config => value => 
    value === '' ? config.message : null;

In other words, each validator accepts a configuration object and returns a fully-configured validator. When that function is called with a value, it returns the message prop if the value is invalid, or null if it’s valid. You can look at how some of these validators are implemented by looking at the source code.

To access these validators, install the calidators package with npm install calidators.
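
As a quick illustration, here is how a configured validator could be used on its own (the return values are shown as comments):

import { isRequired } from 'calidators';

// Configure the validator once with its error message...
const validateUsername = isRequired({ message: 'Please fill out a username' });

// ...then call it with values to validate.
validateUsername('');          // 'Please fill out a username'
validateUsername('kristofer'); // null, meaning the value is valid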

Validate a single field

Remember the config we pass to our useValidation object? It looks like this:

{ 
  fields: {
    username: {
      isRequired: { message: 'Please fill out a username' },
    },
    password: {
      isRequired: { message: 'Please fill out a password' },
      isMinLength: { value: 6, message: 'Please make it more secure' }
    }
  },
  // more stuff
}

To simplify our implementation, let’s assume we only have a single field to validate. We’ll go through each key of the field’s configuration object, and run the validators one by one until we either find an error or are done validating.

import * as validators from 'calidators';

function validateField(fieldValue = '', fieldConfig) {
  for (let validatorName in fieldConfig) {
    const validatorConfig = fieldConfig[validatorName];
    const validator = validators[validatorName];
    const configuredValidator = validator(validatorConfig);
    const errorMessage = configuredValidator(fieldValue);

    if (errorMessage) {
      return errorMessage;
    }
  }
  return null;
}

Here, we’ve written a function validateField, which accepts the value to validate and the validator configs for that field. We loop through all of the validators, pass them the config for that validator, and run it. If we get an error message, we skip the rest of the validators and return. If not, we try the next validator.
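
For example, using the password field configuration from our config above, validateField would behave roughly like this:

const passwordConfig = {
  isRequired: { message: 'Please fill out a password' },
  isMinLength: { value: 6, message: 'Please make it more secure' },
};

validateField('', passwordConfig);            // 'Please fill out a password'
validateField('abc', passwordConfig);         // 'Please make it more secure'
validateField('long enough', passwordConfig); // null, every validator passed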

Note: On validator APIs

If you choose different validators with different APIs (like the very popular validator.js), this part of your code might look a bit different. For brevity’s sake, however, we leave that part as an exercise for the reader.

Note: On for…in loops

Never used for...in loops before? That’s fine, this was my first time too! Basically, it iterates over the keys in an object. You can read more about them at MDN.
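
A tiny, self-contained example of the mechanics:

const scores = { alice: 3, bob: 5 };

for (let name in scores) {
  console.log(name, scores[name]); // logs 'alice 3', then 'bob 5'
}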

Validate all the fields

Now that we’ve validated one field, we should be able to validate all fields without too much trouble.

function validateField(fieldValue = '', fieldConfig) {
  // as before
}

function validateFields(fieldValues, fieldConfigs) {
  const errors = {};
  for (let fieldName in fieldConfigs) {
    const fieldConfig = fieldConfigs[fieldName];
    const fieldValue = fieldValues[fieldName];

    errors[fieldName] = validateField(fieldValue, fieldConfig);
  }
  return errors;
}

We’ve written a function validateFields that accepts all field values and the entire field config. We loop through each field name in the config and validate that field with its config object and value.
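
To see it in action, here is a rough sketch of calling validateFields with the values from a half-filled login form and the field config from before:

const errors = validateFields(
  { username: '', password: 'abc' },
  config.fields
);

// errors now looks something like this:
// {
//   username: 'Please fill out a username',
//   password: 'Please make it more secure',
// }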

Next: Tell our reducer

Alrighty, so now we have this function that validates all of our stuff. Let’s pull it into the rest of our code!

First, we’re going to add a validate action handler to our validationReducer.

function validationReducer(state, action) {
  switch (action.type) {
    case 'change':
      // as before
    case 'submit':
      // as before
    case 'validate': 
      return { ...state, errors: action.payload };
    default:
      throw new Error('Unknown action type');
  }
}

Whenever we trigger the validate action, we replace the errors in our state with whatever was passed alongside the action.
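
In other words, dispatching something like the following (the payload is just an example) swaps out the whole errors object in one go:

dispatch({
  type: 'validate',
  payload: { username: 'Please fill out a username', password: null },
});

// state.errors now equals exactly that payload object.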

Next up, we’re going to trigger our validation logic from a useEffect hook:

const useValidation = config => {
  const [state, dispatch] = useReducer(validationReducer, initialState);

  useEffect(() => {
    const errors = validateFields(state.values, config.fields);
    dispatch({ type: 'validate', payload: errors });
  }, [state.values, config.fields]);
  
  return {
    // as before
  };
};

This useEffect hook runs whenever either our state.values or config.fields changes, in addition to on first mount.

Beware Of Bug 🐛

There’s a super subtle bug in the code above. We’ve specified that our useEffect hook should only re-run whenever state.values or config.fields change. Turns out, “change” doesn’t necessarily mean a change in value! useEffect uses Object.is to compare its dependencies, which in turn uses reference equality. That is — if you pass a new object with the same content, it won’t be the same (since the object itself is new).
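
A quick illustration of why reference equality trips us up here:

const a = { fields: {} };
const b = { fields: {} };

Object.is(a, b); // false, two distinct objects, even with identical content
Object.is(a, a); // true, same reference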

The state.values object is returned from useReducer, which guarantees us this reference equality, but our config is specified inline in our function component. That means the object is re-created on every render, which in turn will trigger the useEffect above!

To solve this, we need to use the use-deep-compare-effect library by Kent C. Dodds. You install it with npm install use-deep-compare-effect, and replace your useEffect call with useDeepCompareEffect instead. This makes sure we do a deep equality check instead of a reference equality check.

Your code will now look like this:

import useDeepCompareEffect from 'use-deep-compare-effect';

const useValidation = config => {
  const [state, dispatch] = useReducer(validationReducer, initialState);

  useDeepCompareEffect(() => {
    const errors = validateFields(state.values, config.fields);
    dispatch({ type: 'validate', payload: errors });
  }, [state.values, config.fields]);
  
  return {
    // as before
  };
};

A Note On useEffect

Turns out, useEffect is a pretty interesting function. Dan Abramov wrote a really nice, long article on the intricacies of useEffect if you’re interested in learning all there is about this hook.

Now things are starting to look like a validation library!

Handling Form Submission

The final piece of our basic form validation library is handling what happens when we submit the form. Right now, it reloads the page, and nothing happens. That’s not optimal. We want to prevent the default browser behavior when it comes to forms, and handle it ourselves instead. We place this logic inside the getFormProps prop getter function:

const useValidation = config => {
  const [state, dispatch] = useReducer(validationReducer, initialState);
  // as before
  return {
    getFormProps: () => ({
      onSubmit: e => {
        e.preventDefault();
        dispatch({ type: 'submit' });
        if (config.onSubmit) {
          config.onSubmit(state);
        }
      },
    }),
    // as before
  };
};

We change our getFormProps function to return an onSubmit handler, which runs whenever the form’s submit DOM event fires. We prevent the default browser behavior, dispatch an action to tell our reducer we submitted, and call the provided onSubmit callback with the entire state — if one is provided.
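
For completeness, here is a small sketch of what a consumer’s onSubmit callback receives (the logging is only illustrative):

const config = {
  fields: { /* as before */ },
  onSubmit: state => {
    // state.values holds the current field values,
    // state.errors holds any validation messages,
    // and state.submitted is true at this point.
    console.log('Submitted with', state.values, state.errors);
  },
};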

Summary

We’re there! We’ve created a simple, usable and pretty cool validation library. There’s still tons of work to do before we can dominate the interwebs, though.

Stay tuned for Part 2 next week!

Smashing Editorial
(dm, il)

Source: Smashing Magazine, Creating Your Own React Validation Library: The Basics (Part 1)
