
Collective #561









Overview

A beautiful project: Overview uses satellite and aerial imagery to demonstrate how human activity and natural forces shape our Earth.

Check it out
















CLARO

A minimal to-do app without distractions, just your workweek with things to do at a glance. Free for one year if you invite friends.

Check it out


Collective #561 was written by Pedro Botelho and published on Codrops.


Source: Codrops, Collective #561

Writing Asynchronous Tasks In Modern JavaScript


Jeremias Menichelli



JavaScript has two defining characteristics as a programming language, both important to understand how our code will work. First is its synchronous nature: the code runs line after line, almost as you read it. Second, it is single-threaded: only one command is being executed at any time.

As the language evolved, new artifacts appeared on the scene to allow asynchronous execution; developers tried different approaches while solving more complicated algorithms and data flows, which led to the emergence of new interfaces and patterns around them.

Synchronous Execution And The Observer Pattern

As mentioned in the introduction, JavaScript runs the code you write line by line, most of the time. Even in its first years, the language had exceptions to this rule, though they were few, and you might know them already: HTTP requests, DOM events, and time intervals.

If we add an event listener to respond to a user’s click on an element, then whatever the language interpreter is running, it will stop, run the code we wrote in the listener callback, and then go back to its normal flow.

The same happens with an interval or a network request; addEventListener, setTimeout, and XMLHttpRequest were the first artifacts giving web developers access to asynchronous execution.

Though these were exceptions to synchronous execution in JavaScript, it’s crucial to understand that the language is still single-threaded. We can break this synchronicity, but the interpreter will still run one line of code at a time.
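
A minimal sketch makes both traits visible at once: even a callback scheduled with a zero-millisecond delay only runs after the currently executing synchronous code has finished.

console.log('first');

// the callback is scheduled, but the interpreter finishes the
// current synchronous code before running it, even with a 0ms delay
setTimeout(() => console.log('third'), 0);

console.log('second');
// logs: first, second, third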

For example, let’s check out a network request.

var request = new XMLHttpRequest();
request.open('GET', '//some.api.at/server', true);

// observe for server response
request.onreadystatechange = function() {
  if (request.readyState === 4 && request.status === 200) {
    console.log(request.responseText);
  }
}

request.send();

No matter what else is happening, when the server response comes back, the method assigned to onreadystatechange gets called before the program resumes its normal code sequence.

Something similar happens when reacting to user interaction.

const button = document.querySelector('button');

// observe for user interaction
button.addEventListener('click', function(e) {
  console.log('user click just happened!');
})

You might notice that we are hooking up to an external event and passing a callback, telling the code what to do when it takes place. Over a decade ago, “What is a callback?” was an almost guaranteed interview question, because this pattern was everywhere in most codebases.

In each case mentioned, we are responding to an external event: a certain interval of time being reached, a user action, or a server response. We weren’t able to create an asynchronous task per se; we always observed occurrences happening outside of our reach.

This is why code shaped this way is said to follow the Observer Pattern, best represented by the addEventListener interface in this case. Soon, event emitter libraries and frameworks exposing this pattern flourished.

Node.js And Event Emitters

A good example is Node.js, whose site describes it as “an asynchronous event-driven JavaScript runtime”, so event emitters and callbacks were first-class citizens. It even had an EventEmitter constructor already implemented.

const EventEmitter = require('events');
const emitter = new EventEmitter();

// respond to events
emitter.on('greeting', (message) => console.log(message));

// send events
emitter.emit('greeting', 'Hi there!');

This was not only the go-to approach for asynchronous execution, but also a core pattern and convention of its ecosystem. Node.js opened a new era of writing JavaScript in a different environment, even outside the web. As a consequence, other asynchronous situations became possible, like creating new directories or writing files.

const { mkdir, writeFile } = require('fs');

const styles = 'body { background: #ffdead; }';

mkdir('./assets/', (error) => {
  if (!error) {
    writeFile('assets/main.css', styles, 'utf-8', (error) => {
      if (!error) console.log('stylesheet created');
    })
  }
})

You might notice that callbacks receive an error as their first argument; if response data is expected, it goes in the second argument. This became known as the Error-first Callback Pattern, a convention that authors and contributors adopted for their own packages and libraries.
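
To make the convention concrete, here is a small sketch of a function following it; readJSON is a made-up helper for illustration, not part of Node’s API.

const { readFile } = require('fs');

// hypothetical helper following the Error-first Callback Pattern:
// the callback receives an error first, response data second
function readJSON(path, callback) {
  readFile(path, 'utf-8', (error, content) => {
    if (error) return callback(error); // failure: error first, no data

    try {
      callback(null, JSON.parse(content)); // success: null error, data second
    } catch (parseError) {
      callback(parseError);
    }
  });
}

readJSON('./package.json', (error, pkg) => {
  if (error) return console.error(error);
  console.log(pkg.name);
});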

Promises And The Endless Callback Chain

As web development faced more complex problems to solve, the need for better asynchronous artifacts appeared. If we look at the last code snippet, we can see repeated callback chaining, which doesn’t scale well as the number of tasks increases.

For example, let’s add only two more steps, file reading and styles preprocessing.

const { mkdir, writeFile, readFile } = require('fs');
const less = require('less')

readFile('./main.less', 'utf-8', (error, data) => {
  if (error) throw error
  less.render(data, (lessError, output) => {
    if (lessError) throw lessError
    mkdir('./assets/', (dirError) => {
      if (dirError) throw dirError
      writeFile('assets/main.css', output.css, 'utf-8', (writeError) => {
        if (writeError) throw writeError
        console.log('stylesheet created');
      })
    })
  })
})

We can see how, as the program we are writing gets more complex, the code becomes harder for the human eye to follow due to multiple callback chains and repeated error handling.

Promises, Wrappers And Chain Patterns

Promises didn’t receive much attention when they were first announced as a new addition to the JavaScript language; they aren’t a new concept, as other languages had similar implementations decades before. Truth is, they turned out to change a lot about the semantics and structure of most of the projects I’ve worked on since their appearance.

Promises not only introduced a built-in solution for developers to write asynchronous code, but also opened a new stage in web development, serving as the construction base of later features of the web spec, like fetch.

Migrating a method from a callback approach to a promise-based one became more and more usual in projects (such as libraries and browsers), and even Node.js started slowly migrating to them.

Let’s, for example, wrap Node’s readFile method:

const { readFile } = require('fs');

const asyncReadFile = (path, options) => {
  return new Promise((resolve, reject) => {
    readFile(path, options, (error, data) => {
      if (error) reject(error);
      else resolve(data);
    })
  });
}

Here we obscure the callback by executing it inside a Promise constructor, calling resolve when the method result is successful and reject when the error object is defined.

When a method returns a Promise object, we can follow its successful resolution by passing a function to then; its argument is the value the promise was resolved with, in this case data.

If an error is thrown while the method runs, the catch function will be called, if present.

Note: If you need to understand more in-depth how Promises work, I recommend Jake Archibald’s “JavaScript Promises: An Introduction” article which he wrote on Google’s web development blog.

Now we can use these new methods and avoid callback chains.

asyncReadFile('./main.less', 'utf-8')
  .then(data => console.log('file content', data))
  .catch(error => console.error('something went wrong', error))

Having a native way to create asynchronous tasks and a clear interface to follow up on their possible results enabled the industry to move out of the Observer Pattern; Promise-based interfaces seemed to solve the unreadable, error-prone code.

Just as better syntax highlighting or clearer error messages help while coding, code that is easier to reason about becomes more predictable for the developer reading it; with a better picture of the execution path, it’s easier to catch a possible pitfall.

Promise adoption was so widespread in the community that Node.js rapidly released built-in versions of its I/O methods that return Promise objects, like the file operations importable from fs.promises.

It even provided a promisify util to wrap any function which followed the Error-first Callback Pattern and transform it into a Promise-based one.
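
For reference, here is a minimal sketch of that util in use, wrapping the same readFile we promisified by hand above:

const { promisify } = require('util');
const { readFile } = require('fs');

// wraps the error-first callback API into a Promise-based one
const asyncReadFile = promisify(readFile);

asyncReadFile('./main.less', 'utf-8')
  .then(data => console.log('file content', data))
  .catch(error => console.error('something went wrong', error));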

But do Promises help in all cases?

Let’s re-imagine our style preprocessing task written with Promises.

const { mkdir, writeFile, readFile } = require('fs').promises;
const less = require('less')

readFile('./main.less', 'utf-8')
  .then(less.render)
  .then(result =>
    mkdir('./assets')
      .then(() => writeFile('assets/main.css', result.css, 'utf-8'))
  )
  .catch(error => console.error(error))

There is a clear reduction of redundancy in the code, especially around error handling, as we now rely on catch; but Promises somehow failed to deliver clear code indentation that directly relates to the concatenation of actions.

This is actually achieved in the first then statement after readFile is called. What happens after those lines is the need to create a new scope where we can first make the directory and later write the result to a file. This causes a break in the indentation rhythm, making it hard to determine the instruction sequence at first glance.

A way to solve this is to pre-bake a custom method that handles this and allows the correct concatenation of the methods, but we would be introducing one more depth of complexity to code that already seems to have what it needs to achieve the task we want.

Note: Keep in mind that this is an example program; we are in control of some of the methods, and they all follow an industry convention, but that’s not always the case. With more complex concatenations or the introduction of a library with a different shape, our code style can easily break.

Thankfully, the JavaScript community again learned from other languages’ syntax and added a notation that helps a lot in those cases where asynchronous task concatenation is not as pleasant or straightforward to read as synchronous code is.

Async And Await

A Promise is defined as an unresolved value at execution time, and creating an instance of a Promise is an explicit call of this artifact. Declaring a function as async makes it implicitly return a Promise, without this explicit construction.


Inside an async method, we can use the await reserved word to wait for the resolution of a Promise before continuing execution.

Let’s revisit our code snippet using this syntax.

const { mkdir, writeFile, readFile } = require('fs').promises;
const less = require('less')

async function processLess() {
  const content = await readFile('./main.less', 'utf-8')
  const result = await less.render(content)
  await mkdir('./assets')
  await writeFile('assets/main.css', result.css, 'utf-8')
}

processLess()

Note: Notice that we needed to move all our code to a method because we can’t use await outside the scope of an async function today.

Every time an async method finds an await statement, it will stop executing until the awaited value or promise gets resolved.

There’s a clear consequence of using async/await notation: despite its asynchronous execution, the code looks as if it were synchronous, which is something we developers are more used to seeing and reasoning about.

What about error handling? For it, we use statements that have been present for a long time in the language, try and catch.

const { mkdir, writeFile, readFile } = require('fs').promises;
const less = require('less')

async function processLess() {
  const content = await readFile('./main.less', 'utf-8')
  const result = await less.render(content)
  await mkdir('./assets')
  await writeFile('assets/main.css', result.css, 'utf-8')
}

// processLess is async, so we await it inside an async IIFE;
// without the await, a rejected promise would escape the try/catch
(async () => {
  try {
    await processLess()
  } catch (e) {
    console.error(e)
  }
})()

We can rest assured that any error thrown in the process will be handled by the code inside the catch statement (note that we must await the call, or else the rejected promise would escape the try/catch). We get a central place that takes care of error handling, and code that is easier to read and follow.

Actions whose returned value doesn’t need to be stored, like mkdir, no longer break the code rhythm; there’s also no need to create a new scope to access the value of result in a later step.

It’s safe to say Promises were a fundamental artifact introduced in the language, necessary to enable async/await notation in JavaScript, which you can use in both modern browsers and the latest versions of Node.js.

Note: Recently at JSConf, Ryan Dahl, creator and first contributor of Node, expressed regret about not sticking to Promises in Node’s early development, mostly because the goal of Node was to create event-driven servers and file management, which the Observer Pattern served better.

Conclusion

The introduction of Promises into the web development world changed the way we queue actions in our code, how we reason about our code’s execution, and how we author libraries and packages.

But moving away from callback chains was harder to solve; I think that having to pass a method to then didn’t help us escape that train of thought after years of being accustomed to the Observer Pattern and the approaches adopted by major vendors in the community, like Node.js.

As Nolan Lawson says in his excellent article about wrong uses in Promise concatenations, old callback habits die hard! He later explains how to escape some of these pitfalls.
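
One of those pitfalls is worth a quick sketch: awaiting independent tasks one by one serializes work that could run concurrently, while Promise.all starts them together (the file names below are made up).

const { readFile } = require('fs').promises;

async function loadStylesSequentially() {
  // the second read doesn't start until the first one finishes
  const base = await readFile('./base.less', 'utf-8');
  const theme = await readFile('./theme.less', 'utf-8');
  return base + theme;
}

async function loadStylesConcurrently() {
  // both reads start immediately and are awaited together
  const [base, theme] = await Promise.all([
    readFile('./base.less', 'utf-8'),
    readFile('./theme.less', 'utf-8')
  ]);
  return base + theme;
}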

I believe Promises were needed as a middle step to allow a natural way to create asynchronous tasks, but they didn’t help us much to move forward on better code patterns; sometimes you actually need a more adaptable and improved language syntax.


As we try to solve more complex puzzles using JavaScript, we see the need for a more mature language and we experiment with architectures and patterns we weren’t used to seeing on the web before.

We still don’t know how the ECMAScript spec will look in a few years, as we are always extending JavaScript’s governance outside the web and trying to solve more complicated puzzles.

It’s hard to say now what exactly we will need from the language for some of these puzzles to turn into simpler programs, but I’m happy with how the web and JavaScript itself are moving things, trying to adapt to challenges and new environments. I feel JavaScript is a more asynchronous-friendly place right now than it was when I started writing code in a browser over a decade ago.


Source: Smashing Magazine, Writing Asynchronous Tasks In Modern JavaScript

Collective #560




Our Sponsor

Divi 4.0 Has Arrived!

Divi 4.0 has arrived and the new fully-featured website templating system allows you to use the Divi Builder to structure your website and edit any part of the Divi Theme including headers, footers, post templates, category templates and more.

Check it out










Bruno Simon

A wonderful portfolio site of creative developer Bruno Simon built with Three.js and Cannon.js.

Check it out



RegexGuide

The RegexGuide is a playground helping developers to start writing regular expressions.

Check it out



Jsfuzz

Jsfuzz is a coverage-guided fuzzer for testing JavaScript/Node.js packages.

Check it out





Using the Platform

Tim Kadlec on the incredible amount of thought and care that go into deciding the future of the web platform.

Read it






Collective #560 was written by Pedro Botelho and published on Codrops.


Source: Codrops, Collective #560

Create A Bookmarking Application With FaunaDB, Netlify And 11ty


Bryan Robinson



The JAMstack (JavaScript, APIs and Markup) revolution is in full swing. Static sites are secure, fast, reliable and fun to work on. At the heart of the JAMstack are static site generators (SSGs) that store your data as flat files: Markdown, YAML, JSON, HTML, and so on. Sometimes, managing data this way can be overly complicated. Sometimes, we still need a database.

With that in mind, Netlify (a static site host) and FaunaDB (a serverless cloud database) collaborated to make combining both systems easier.

Why A Bookmarking Site?

The JAMstack is great for many professional uses, but one of my favorite aspects of this set of technology is its low barrier to entry for personal tools and projects.

There are plenty of good products on the market for most applications I could come up with, but none would be exactly set up for me. None would give me full control over my content. None would come without a cost (monetary or informational).

With that in mind, we can create our own mini-services using JAMstack methods. In this case, we’ll be creating a site to store and publish any interesting articles I come across in my daily technology reading.

I spend a lot of time reading articles that have been shared on Twitter. When I like one, I hit the “heart” icon. Then, within a few days, it’s nearly impossible to find with the influx of new favorites. I want to build something as close to the ease of the “heart” as possible, but that I own and control.

How are we going to do that? I’m glad you asked.

Interested in getting the code? You can grab it on GitHub or just deploy straight to Netlify from that repository! Take a look at the finished product here.

Our Technologies

Hosting And Serverless Functions: Netlify

For hosting and serverless functions, we’ll be utilizing Netlify. As an added bonus, with the new collaboration mentioned above, Netlify’s CLI — “Netlify Dev” — will automatically connect to FaunaDB and store our API keys as environment variables.

Database: FaunaDB

FaunaDB is a “serverless” NoSQL database. We’ll be using it to store our bookmarks data.

Static Site Generator: 11ty

I’m a big believer in HTML. Because of this, the tutorial won’t be using front-end JavaScript to render our bookmarks. Instead, we’ll utilize 11ty as a static site generator. 11ty has built-in data functionality that makes fetching data from an API as easy as writing a couple of short JavaScript functions.

iOS Shortcuts

We’ll need an easy way to post data to our database. In this case, we’ll use iOS’s Shortcuts app. This could be converted to an Android or desktop JavaScript bookmarklet, as well.

Setting Up FaunaDB Via Netlify Dev

Whether you have already signed up for FaunaDB or you need to create a new account, the easiest way to set up a link between FaunaDB and Netlify is via Netlify’s CLI: Netlify Dev. You can find full instructions from FaunaDB here or follow along below.

Netlify Dev running in the final project with our environment variable names showing (Large preview)

If you don’t already have this installed, you can run the following command in Terminal:

npm install netlify-cli -g

From within your project directory, run through the following commands:

netlify init # Connect your project to a Netlify project

netlify addons:create fauna # Install the FaunaDB "addon"

netlify addons:auth fauna # Run through connecting your account or setting up an account

Once this is all connected, you can run netlify dev in your project. This will run any build scripts we set up, but also connect to the Netlify and FaunaDB services and grab any necessary environment variables. Handy!

Creating Our First Data

From here, we’ll log into FaunaDB and create our first data set. We’ll start by creating a new Database called “bookmarks.” Inside a Database, we have Collections, Documents and Indexes.

A screenshot of the FaunaDB console with data (Large preview)

A Collection is a categorized group of data. Each piece of data takes the form of a Document. A Document is a “single, changeable record within a FaunaDB database,” according to Fauna’s documentation. You can think of Collections as a traditional database table and a Document as a row.

For our application, we need one Collection, which we’ll call “links.” Each document within the “links” Collection will be a simple JSON object with three properties. To start, we’ll add a new Document that we’ll use to build our first data fetch.

{
  "url": "https://css-irl.info/debugging-css-grid-part-2-what-the-fraction/",
  "pageTitle": "CSS { In Real Life } | Debugging CSS Grid – Part 2: What the Fr(action)?",
  "description": "CSS In Real Life is a blog covering CSS topics and useful snippets on the web’s most beautiful language. Published by Michelle Barker, front end developer at Ordoo and CSS superfan."
}

This creates the basis for the information we’ll need to pull from our bookmarks as well as provides us with our first set of data to pull into our template.

If you’re like me, you want to see the fruits of your labor right away. Let’s get something on the page!

Installing 11ty And Pulling Data Into A Template

Since we want the bookmarks to be rendered in HTML and not fetched by the browser, we’ll need something to do the rendering. There are many great ways of doing it, but for ease and power, I love using the 11ty static site generator.

Since 11ty is a JavaScript static site generator, we can install it via NPM.

npm install --save @11ty/eleventy

From that installation, we can run eleventy or eleventy --serve in our project to get up and running.

Netlify Dev will often detect 11ty as a requirement and run the command for us. To make this work, and to be sure we’re ready to deploy, we can also create “serve” and “build” commands in our package.json.

"scripts": {
    "build": "npx eleventy",
    "serve": "npx eleventy --serve"
  }

11ty’s Data Files

Most static site generators have an idea of a “data file” built-in. Usually, these files will be JSON or YAML files that allow you to add extra information to your site.

In 11ty, you can use JSON data files or JavaScript data files. By utilizing a JavaScript file, we can actually make our API calls and return the data directly into a template.

The file will be a JavaScript module. So in order to have anything work, we need to export either our data or a function. In our case, we’ll export a function.

module.exports = async function() {  
    const data = mapBookmarks(await getBookmarks());  

    return data.reverse()  
}

Let’s break that down. We have two functions doing our main work here: mapBookmarks() and getBookmarks().

The getBookmarks() function will go fetch our data from our FaunaDB database and mapBookmarks() will take an array of bookmarks and restructure it to work better for our template.

Let’s dig deeper into getBookmarks().

getBookmarks()

First, we’ll need to install and initialize an instance of the FaunaDB JavaScript driver.

npm install --save faunadb

Now that we’ve installed it, let’s add it to the top of our data file. This code is straight from Fauna’s docs.

// Requires the Fauna module and sets up the query module, which we can use to create custom queries.
const faunadb = require('faunadb'),  
      q = faunadb.query;

// Once required, we need a new instance with our secret
var adminClient = new faunadb.Client({  
   secret: process.env.FAUNADB_SERVER_SECRET  
});

After that, we can create our function. We’ll start by building our first query using built-in methods on the driver. This first bit of code will return the database references we can use to get full data for all of our bookmarked links. We use the Paginate method as a helper to manage cursor state, should we decide to paginate the data before handing it to 11ty. In our case, we’ll just return all the references.

In this example, I’m assuming you installed and connected FaunaDB via the Netlify Dev CLI. Using this process, you get local environment variables of the FaunaDB secrets. If you didn’t install it this way or aren’t running netlify dev in your project, you’ll need a package like dotenv to create the environment variables. You’ll also need to add your environment variables to your Netlify site configuration to make deploys work later.

adminClient.query(q.Paginate(  
       q.Match( // Match the reference below
           q.Ref("indexes/all_links") // Reference to match, in this case, our all_links index  
       )  
   ))  
   .then( response => { ... })

This code will return an array of all of our links in reference form. We can now build a query list to send to our database.

adminClient.query(...)
    .then((response) => {
        const linkRefs = response.data; // Get just the references for the links from the response
        const getAllLinksDataQuery = linkRefs.map((ref) => {
            return q.Get(ref) // Return a Get query based on the reference passed in
        })

        return adminClient.query(getAllLinksDataQuery).then(ret => {
            return ret // Return an array of all the links with full data
        })
    }).catch(...)

From here, we just need to clean up the data returned. That’s where mapBookmarks() comes in!

mapBookmarks()

In this function, we deal with two aspects of the data.

First, we get a free dateTime in FaunaDB. For any data created, there’s a timestamp (ts) property. It’s not formatted in a way that makes Liquid’s default date filter happy, so let’s fix that.

function mapBookmarks(data) {
    return data.map(bookmark => {
        const dateTime = new Date(bookmark.ts / 1000);
        ...
    })
}

With that out of the way, we can build a new object for our data. In this case, it will have a time property, and we’ll use the Spread operator to destructure our data object to make them all live at one level.

function mapBookmarks(data) {
    return data.map(bookmark => {
        const dateTime = new Date(bookmark.ts / 1000);

        return { time: dateTime, ...bookmark.data }
    })
}

Here’s our data before our function:

{ 
  ref: Ref(Collection("links"), "244778237839802888"),
  ts: 1569697568650000,
  
  data: { 
    url: 'https://sample.com',
    pageTitle: 'Sample title',
    description: 'An escaped description goes here' 
  } 
}

Here’s our data after our function:

{
    time: 1569697568650,
    url: 'https://sample.com',
    pageTitle: 'Sample title',
    description: 'An escaped description goes here'
}

Now, we’ve got well-formatted data that’s ready for our template!

Let’s write a simple template. We’ll loop through our bookmarks and validate that each has a pageTitle and a url so we don’t look silly.

<div class="bookmarks">
  {% for link in bookmarks %}
    {% if link.url and link.pageTitle %} {% comment %} confirms there’s both title AND url for safety {% endcomment %}
      <a href="{{ link.url }}">{{ link.pageTitle }}</a>
      <p>Saved on {{ link.time | date: "%b %d, %Y" }}</p>
      {% if link.description != "" %}
        <p>{{ link.description }}</p>
      {% endif %}
    {% endif %}
  {% endfor %}
</div>

We’re now ingesting and displaying data from FaunaDB. Let’s take a moment and think about how nice it is that this renders out pure HTML and there’s no need to fetch data on the client side!

But that’s not really enough to make this a useful app for us. Let’s figure out a better way than adding a bookmark in the FaunaDB console.

Enter Netlify Functions

Netlify’s Functions add-on is one of the easier ways to deploy AWS lambda functions. Since there’s no configuration step, it’s perfect for DIY projects where you just want to write the code.

This function will live at a URL in your project that looks like this: https://myproject.com/.netlify/functions/bookmarks assuming the file we create in our functions folder is bookmarks.js.

Basic Flow

  1. Pass a URL as a query parameter to our function URL (see the sketch after this list).
  2. Use the function to load the URL and scrape the page’s title and description if available.
  3. Format the details for FaunaDB.
  4. Push the details to our FaunaDB Collection.
  5. Rebuild the site.
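
Assuming that flow, exercising the function from code could look like the following sketch; the domain and the bookmarked URL are placeholders, and request-promise is introduced in the next section.

const rp = require('request-promise');

// the page we want to bookmark, URL-encoded as a query parameter
const target = encodeURIComponent('https://example.com/article');

rp({
    method: 'POST',
    uri: `https://myproject.com/.netlify/functions/bookmarks?url=${target}`,
    json: true
}).then((response) => {
    console.log('bookmark saved', response);
}).catch((error) => {
    console.error('save failed', error);
});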

Requirements

We’ve got a few packages we’ll need as we build this out. We’ll use the netlify-lambda CLI to build our functions locally. request-promise is the package we’ll use for making requests. Cheerio.js is the package we’ll use to scrape specific items from our requested page (think jQuery for Node). And finally, we’ll need FaunaDB (which should already be installed).

npm install --save netlify-lambda request-promise cheerio

Once that’s installed, let’s configure our project to build and serve the functions locally.

We’ll modify our “build” and “serve” scripts in our package.json to look like this:

"scripts": {
    "build": "npx netlify-lambda build lambda --config ./webpack.functions.js && npx eleventy",
    "serve": "npx netlify-lambda build lambda --config ./webpack.functions.js && npx eleventy --serve"
}

Warning: There’s an error with Fauna’s NodeJS driver when compiling with Webpack, which Netlify’s Functions use to build. To get around this, we need to define a configuration file for Webpack. You can save the following code to a new or existing webpack.functions.js file.

const webpack = require('webpack');

module.exports = {
  plugins: [ new webpack.DefinePlugin({ "global.GENTLY": false }) ]
};

Once this file exists, when we use the netlify-lambda command, we’ll need to tell it to run from this configuration. This is why our “serve” and “build” scripts use the --config value for that command.

Function Housekeeping

In order to keep our main Function file as clean as possible, we’ll create our functions in a separate bookmarks directory and import them into our main Function file.

import { getDetails, saveBookmark } from "./bookmarks/create";

getDetails(url)

The getDetails() function will take a URL, passed in from our exported handler. From there, we’ll reach out to the site at that URL and grab relevant parts of the page to store as data for our bookmark.

We start by requiring the NPM packages we need:

const rp = require('request-promise');  
const cheerio = require('cheerio');

Then, we’ll use the request-promise module to return an HTML string for the requested page and pass that into cheerio to give us a very jQuery-esque interface.

const getDetails = async function(url) {
    const data = rp(url).then(function(htmlString) {
        const $ = cheerio.load(htmlString);
        ...
    });
}

From here, we need to get the page title and a meta description. To do that, we’ll use selectors like you would in jQuery. 

Note: In this code, we use 'head > title' as the selector to get the title of the page. If you don’t specify this, you may end up getting <title> tags inside of all SVGs on the page, which is less than ideal.

const getDetails = async function(url) {
  const data = rp(url).then(function(htmlString) {
    const $ = cheerio.load(htmlString);
    const title = $('head > title').text(); // Get the text inside the tag  
    const description = $('meta[name="description"]').attr('content'); // Get the text of the content attribute

// Return out the data in the structure we expect  
    return {
      pageTitle: title,
      description: description
    };
  });
  return data //return to our main function  
}

With data in hand, it’s time to send our bookmark off to our Collection in FaunaDB!

saveBookmark(details)

For our save function, we’ll want to pass the details we acquired from getDetails as well as the URL as a singular object. The Spread operator strikes again!

const savedResponse = await saveBookmark({url, ...details});

In our create.js file, we also need to require and setup our FaunaDB driver. This should look very familiar from our 11ty data file.

const faunadb = require('faunadb'),  
      q = faunadb.query;  

const adminClient = new faunadb.Client({  
   secret: process.env.FAUNADB_SERVER_SECRET  
});

Once we’ve got that out of the way, we can code.

First, we need to format our details into a data structure that Fauna is expecting for our query. Fauna expects an object with a data property containing the data we wish to store.

const saveBookmark = async function(details) {
    const data = {
        data: details
    };

    ...

}

Then we’ll open a new query to add to our Collection. In this case, we’ll use our query helper and use the Create method. Create() takes two arguments. First is the Collection in which we want to store our data and the second is the data itself.

After we save, we return either success or failure to our handler.

const saveBookmark = async function(details) {
    const data = {
        data: details
    };

    return adminClient.query(q.Create(q.Collection("links"), data))
        .then((response) => {
            /* Success! return the response with statusCode 200 */
            return {
                statusCode: 200,
                body: JSON.stringify(response)
            }
        }).catch((error) => {
            /* Error! return the error with statusCode 400 */
            return {
                statusCode: 400,
                body: JSON.stringify(error)
            }
        })
}

Let’s take a look at the full Function file.

import { getDetails, saveBookmark } from "./bookmarks/create";  
import { rebuildSite } from "./utilities/rebuild"; // For rebuilding the site (more on that in a minute)

exports.handler = async function(event, context) {  
    try {  
        const url = event.queryStringParameters.url; // Grab the URL  

        const details = await getDetails(url); // Get the details of the page  
        const savedResponse = await saveBookmark({url, ...details}); //Save the URL and the details to Fauna  

        if (savedResponse.statusCode === 200) { 
            // If successful, return success and trigger a Netlify build  
            await rebuildSite();  
            return { statusCode: 200, body: savedResponse.body }  
         } else {  
            return savedResponse //or else return the error  
         }  
     } catch (err) {  
        return { statusCode: 500, body: `Error: ${err}` };  
     }  
};

rebuildSite()

The discerning eye will notice that we have one more function imported into our handler: rebuildSite(). This function will use Netlify’s Deploy Hook functionality to rebuild our site from the new data every time we submit a new — successful — bookmark save.

In your site’s settings in Netlify, you can access your Build & Deploy settings and create a new “Build Hook.” Hooks have a name that appears in the Deploy section and an option for a non-master branch to deploy if you so wish. In our case, we’ll name it “new_link” and deploy our master branch.

A visual reference for the Netlify Admin’s build hook setup (Large preview)

From there, we just need to send a POST request to the URL provided.

We need a way of making requests and since we’ve already installed request-promise, we’ll continue to use that package by requiring it at the top of our file.

const rp = require('request-promise');  

const rebuildSite = async function() {  
    var options = {  
         method: 'POST',  
         uri: 'https://api.netlify.com/build_hooks/5d7fa6175504dfd43377688c',  
         body: {},  
         json: true  
    };  

    const returned = await rp(options).then(function(res) {  
         console.log('Successfully hit webhook', res);  
     }).catch(function(err) {  
         console.log('Error:', err);  
     });  

    return returned  
}

A demo of the Netlify Function setup and the iOS Shortcut setup combined

Setting Up An iOS Shortcut

So, we have a database, a way to display data and a function to add data, but we’re still not very user-friendly.

Netlify provides URLs for our Lambda functions, but they’re not fun to type into a mobile device. We’d also have to pass a URL as a query parameter into it. That’s a LOT of effort. How can we make this as little effort as possible?

A visual reference for the setup for our Shortcut functionality (Large preview)

Apple’s Shortcuts app allows the building of custom items to go into your share sheet. Inside these shortcuts, we can send various types of requests with the data collected in the share process.

Here’s the step-by-step Shortcut:

  1. Accept any items and store that item in a “text” block.
  2. Pass that text into a “Scripting” block to URL encode (just in case).
  3. Pass that string into a URL block with our Netlify Function’s URL and a query parameter of url.
  4. From “Network” use a “Get contents” block to POST JSON to our URL.
  5. Optional: From “Scripting” “Show” the contents of the last step (to confirm the data we’re sending).

To access this from the sharing menu, we open up the settings for this Shortcut and toggle on the “Show in Share Sheet” option.

As of iOS 13, these share “Actions” can be favorited and moved to a high position in the dialog.

We now have a working “app” for sharing bookmarks across multiple platforms!

Go The Extra Mile!

If you’re inspired to try this yourself, there are a lot of other possibilities to add functionality. The joy of the DIY web is that you can make these sorts of applications work for you. Here are a few ideas:

  1. Use a faux “API key” for quick authentication, so other users don’t post to your site (mine uses an API key, so don’t try to post to it!).
  2. Add tag functionality to organize bookmarks.
  3. Add an RSS feed for your site so that others can subscribe.
  4. Send out a weekly roundup email programmatically for links that you’ve added.

Really, the sky is the limit, so start experimenting!


Source: Smashing Magazine, Create A Bookmarking Application With FaunaDB, Netlify And 11ty

Writing A Multiplayer Text Adventure Engine In Node.js: Game Engine Server Design (Part 2)


Fernando Doglio



After some careful consideration and actual implementation of the module, some of the definitions I made during the design phase had to be changed. This should be a familiar scene for anyone who has ever worked with an eager client who dreams about an ideal product but needs to be restrained by the development team.

Once features have been implemented and tested, your team will start noticing that some characteristics might differ from the original plan, and that’s alright. Simply notify, adjust, and go on. So, without further ado, allow me to first explain what has changed from the original plan.

Battle Mechanics

This is probably the biggest change from the original plan. I know I said I was going to go with a D&D-esque implementation in which each PC and NPC involved would get an initiative value and after that, we would run a turn-based combat. It was a nice idea, but implementing it on a REST-based service is a bit complicated since you can’t initiate the communication from the server side, nor maintain status between calls.

So instead, I will take advantage of the simplified mechanics of REST and use them to simplify our battle mechanics. The implemented version will be player-based instead of party-based, and will allow players to attack NPCs (Non-Player Characters). If an attack succeeds, the NPC will be killed; otherwise, it will attack back, either damaging or killing the player.

Whether an attack succeeds or fails will be determined by the type of weapon used and the weaknesses an NPC might have. So basically, if the monster you’re trying to kill is weak against your weapon, it dies. Otherwise, it’ll be unaffected and — most likely — very angry.
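
As a sketch of that check (the weaknesses, hp and damage property names are my assumptions for illustration, not the engine’s actual schema):

// hypothetical shapes: npc.weaknesses = ['fire'], weapon.type = 'fire'
function attackSucceeds(npc, weapon) {
    return Array.isArray(npc.weaknesses) && npc.weaknesses.includes(weapon.type)
}

function resolveAttack(player, npc, weapon) {
    if (attackSucceeds(npc, weapon)) {
        return { result: 'killed', target: npc.name }
    }

    // otherwise the NPC strikes back, damaging or killing the player
    player.hp -= npc.damage
    return { result: player.hp > 0 ? 'damaged' : 'dead', target: player.name }
}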

Triggers

If you paid close attention to the JSON game definition from my previous article, you might’ve noticed the trigger’s definition found on scene items. A particular one involved updating the game status (statusUpdate). During implementation, I realized having it working as a toggle provided limited freedom. You see, in the way it was implemented (from an idiomatic point of view), you were able to set a status but unsetting it wasn’t an option. So instead, I’ve replaced this trigger effect with two new ones: addStatus and removeStatus. These will allow you to define exactly when these effects can take place — if at all. I feel this is a lot easier to understand and reason about.

This means that the triggers now look like this:

"triggers": [
  {
    "action": "pickup",
    "effect": {
      "addStatus": "has light",
      "target": "game"
    }
  },
  {
    "action": "drop",
    "effect": {
      "removeStatus": "has light",
      "target": "game"
    }
  }
]

When picking up the item, we’re setting up a status, and when dropping it, we’re removing it. This way, having multiple game-level status indicators is completely possible and easy to manage.
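
One possible way for the engine to apply these effects, assuming statuses are kept in a Set on the targeted state object (an assumption for illustration, not the actual implementation):

// gameState.statuses is assumed to be a Set of active indicators
function applyStatusEffect(gameState, effect) {
    if (effect.addStatus) {
        gameState.statuses.add(effect.addStatus)
    }
    if (effect.removeStatus) {
        gameState.statuses.delete(effect.removeStatus)
    }
}

// e.g. picking up the torch item would end up calling:
// applyStatusEffect(game, { addStatus: 'has light', target: 'game' })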

The Implementation

With those updates out of the way, we can start covering the actual implementation. From an architectural point of view, nothing changed; we’re still building a REST API that will contain the main game engine’s logic.

The Tech Stack

For this particular project, the modules I’m going to be using are the following:

Express.js: Obviously, I’ll be using Express as the base for the entire engine.
Winston: Everything regarding logging will be handled by Winston.
Config: Every constant and environment-dependent variable will be handled by the config.js module, which greatly simplifies the task of accessing them.
Mongoose: This will be our ORM. I will model all resources using Mongoose Models and use that to interact directly with the database.
uuid: We’ll need to generate some unique IDs; this module will help us with that task.

As for other technologies used aside from Node.js, we have MongoDB and Redis. I like to use Mongo because it doesn’t require a schema. That simple fact allows me to think about my code and the data formats without having to worry about updating the structure of my tables, schema migrations or conflicting data types.

Regarding Redis, I tend to use it as a support system as much as I can in my projects and this case is no different. I will be using Redis for everything that can be considered volatile information, such as party member numbers, command requests, and other types of data that are small enough and volatile enough to not merit permanent storage.

I’m also going to be using Redis’ key expiration feature to auto manage some aspects of the flow (more on this shortly).

API Definition

Before moving into client-server interaction and data-flow definitions, I want to go over the endpoints defined for this API. There aren’t that many; mostly, we need to comply with the main features described in Part 1:

Join a game: A player will be able to join a game by specifying the game’s ID.
Create a new game: A player can also create a new game instance. The engine should return an ID, so that others can use it to join.
Return scene: This feature should return the current scene where the party is located. Basically, it’ll return the description, with all of the associated information (possible actions, objects in it, etc.).
Interact with scene: This is going to be one of the most complex ones, because it will take a command from the client and perform that action: things like move, push, take, look and read, to name just a few.
Check inventory: Although this is a way to interact with the game, it does not directly relate to the scene. So, checking the inventory for each player will be considered a different action.
Register client application: The above actions require a valid client to execute them. This endpoint will verify the client application and return a Client ID that will be used for authentication purposes on subsequent requests.

The above list translates into the following list of endpoints:

POST /clients: Client applications need to obtain a Client ID key through this endpoint.
POST /games: New game instances are created through this endpoint by the client applications.
POST /games/:id: Once the game is created, this endpoint will enable party members to join it and start playing.
GET /games/:id/:playername: This endpoint will return the current game state for a particular player.
POST /games/:id/:playername/commands: Finally, with this endpoint, the client application will be able to submit commands (in other words, this endpoint will be used to play).

Let me go into a bit more detail about some of the concepts I described in the previous list.

Client Apps

The client applications will need to register into the system to start using it. All endpoints (except for the first one on the list) are secured and will require a valid application key to be sent with the request. In order to obtain that key, client apps need to simply request one. Once provided, they will last for as long as they are used, or will expire after a month of not being used. This behavior is controlled by storing the key in Redis and setting a one-month long TTL to it.
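
That sliding expiration maps directly onto Redis commands; here is a sketch with the node redis client, where the key naming is made up for illustration:

const redis = require('redis')
const client = redis.createClient()

const ONE_MONTH = 60 * 60 * 24 * 30 // TTL in seconds

// store a freshly issued application key with a one-month expiration
function saveApiKey(apiKey, appName) {
    client.set('apikey:' + apiKey, appName, 'EX', ONE_MONTH)
}

// on every authenticated request, reset the TTL so keys in active use
// never expire, while idle ones vanish after a month
function refreshApiKey(apiKey) {
    client.expire('apikey:' + apiKey, ONE_MONTH)
}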

Game Instance

Creating a new game basically means creating a new instance of a particular game. This new instance will contain a copy of all of the scenes and their content. Any modifications done to the game will only affect the party. This way, many groups can play the same game in their own individual way.
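
In code, creating an instance could boil down to deep-copying the scene definitions under a fresh ID; a sketch using the uuid module from the tech stack (the shape of gameDefinition is my assumption):

const { v4: uuidv4 } = require('uuid')

// clone the game definition so each party mutates only its own copy
function createGameInstance(gameDefinition) {
    return {
        id: uuidv4(),
        // a JSON round-trip is a cheap deep copy for plain JSON data
        scenes: JSON.parse(JSON.stringify(gameDefinition.scenes))
    }
}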

Player’s Game State

This is similar to the previous one, but unique to each player. While the game instance holds the game state for the entire party, the player’s game state holds the current status for one particular player. Mainly, this holds inventory, position, current scene and HP (health points).

Player Commands

Once everything is set up and the client application has registered and joined a game, it can start sending commands. The implemented commands in this version of the engine include: move, look, pickup, use, get, and attack.

  • The move command will allow you to traverse the map. You’ll be able to specify the direction you want to move towards and the engine will let you know the result. If you take a quick glimpse at Part 1, you can see the approach I took to handle maps. (In short, the map is represented as a graph, where each node represents a room or scene and is only connected to other nodes that represent adjacent rooms.)

    The distance between nodes is also present in the representation and, coupled with the standard speed a player has, going from room to room might not be as simple as stating your command: you’ll also have to traverse the distance. In practice, this means that going from one room to another might require several move commands. The other interesting aspect of this command comes from the fact that this engine is meant to support multiplayer parties, and the party can’t be split (at least not at this time).

    Therefore, the solution for this is similar to a voting system: every party member will send a move command request whenever they want. Once more than half of them have done so, the most requested direction will be used (see the sketch after this list).

  • look is quite different from move. It allows the player to specify a direction, an item or an NPC they want to inspect. The key logic behind this command comes into play when you think about status-dependent descriptions.

    For example, let’s say that you enter a new room, but it’s completely dark (you don’t see anything), and you move forward while ignoring it. A few rooms later, you pick up a lit torch from a wall. So now you can go back and re-inspect that dark room. Since you’ve picked up the torch, you now can see inside of it, and be able to interact with any of the items and NPCs you find in there.

    This is achieved by maintaining a game-wide and player-specific set of status attributes and allowing the game creator to specify several descriptions for our status-dependent elements in the JSON file. Every description is then equipped with a default text and a set of conditional ones, depending on the current status. The latter are optional; the only one that is mandatory is the default value.

    Additionally, this command has a short-hand version for look at room: look around; that is because players will be trying to inspect a room very often, so providing a short-hand (or alias) command that is easier to type makes a lot of sense.

  • The pickup command plays a very important role for the gameplay. This command takes care of adding items into the player’s inventory or their hands (if they’re free). In order to understand where each item is meant to be stored, their definition has a “destination” property that specifies if it is meant for the inventory or the player’s hands. Anything that is successfully picked up from the scene is then removed from it, updating the game instance’s version of the game.
  • The use command will allow you to affect the environment using items in your inventory. For instance, picking up a key in a room will allow you to use it to open a locked door in another room.
  • There is a special command, one that is not gameplay-related, but instead a helper command meant to obtain particular information, such as the current game ID or the player’s name. This command is called get, and the players can use it to query the game engine. For example: get gameid.
  • Finally, the last command implemented for this version of the engine is the attack command. I already covered this one; basically, you’ll have to specify your target and the weapon you’re attacking it with. That way the system will be able to check the target’s weaknesses and determine the output of your attack.
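
Going back to the move voting described above, tallying the requests could look like the following sketch; representing votes as a plain array of direction strings is my assumption about how they would come out of Redis.

// votes: an array like ['north', 'north', 'east'] collected from the party
function resolveMove(votes, partySize) {
    if (votes.length <= partySize / 2) return null // not enough votes yet

    const tally = {}
    for (const direction of votes) {
        tally[direction] = (tally[direction] || 0) + 1
    }

    // pick the most requested direction
    return Object.keys(tally).reduce((a, b) => (tally[a] >= tally[b] ? a : b))
}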

Client-Engine Interaction

In order to understand how to use the above-listed endpoints, let me show you how any would-be client can interact with our new API.

Register client: First things first, the client application needs to request an API key to be able to access all other endpoints. In order to get that key, it needs to register on our platform. The only parameter to provide is the name of the app, that’s all.
Create a game: After the API key is obtained, the first thing to do (assuming this is a brand new interaction) is to create a brand new game instance. Think about it this way: the JSON file I created in my last post contains the game’s definition, but we need to create an instance of it just for you and your party (think classes and objects, same deal). You can do whatever you want with that instance, and it will not affect other parties.
Join the game: After creating the game, you’ll get a game ID back from the engine. You can then use that game ID to join the instance using your unique username. Unless you join the game, you can’t play, because joining the game will also create a game state instance for you alone. This will be where your inventory, your position and your basic stats are saved in relation to the game you’re playing. You could potentially be playing several games at the same time, and in each one have independent states.
Send commands: In other words: play the game. The final step is to start sending commands. The number of commands available was already covered, and it can be easily extended (more on this in a bit). Every time you send a command, the game will return the new game state for your client to update your view accordingly.

Let’s Get Our Hands Dirty

I’ve gone over as much design as I can, in the hopes that that information will help you understand the following part, so let’s get into the nuts and bolts of the game engine.

Note: I will not be showing you the full code in this article since it’s quite big and not all of it is interesting. Instead, I’ll show the more relevant parts and link to the full repository in case you want more details.

The Main File

First things first: this is an Express project, and its base boilerplate code was generated using Express’ own generator, so the app.js file should be familiar to you. I just want to go over two tweaks I like to make to that code to simplify my work.

First, I add the following snippet to automate the inclusion of new route files:

const requireDir = require("require-dir")
const routes = requireDir("./routes")

//...

Object.keys(routes).forEach( (file) => {
    let cnt = routes[file]
    app.use('/' + file, cnt)    
})

It’s quite simple really, but it removes the need to manually require each route file you create in the future. By the way, require-dir is a simple module that takes care of auto-requiring every file inside a folder. That’s it.

The other change I like to make is tweaking my error handler just a little bit. I should really start using something more robust, but for the needs at hand, I feel like this gets the job done:

// error handler
app.use(function(err, req, res, next) {
  // render the error page
  if(typeof err === "string") {
    err = {
      status: 500,
      message: err
    }
  }
  res.status(err.status || 500);
  let errorObj = {
    error: true,
    msg: err.message,
    errCode: err.status || 500
  }
  if(err.trace) {
    errorObj.trace = err.trace
  }

  res.json(errorObj);
});

The above code takes care of the different types of error messages we might have to deal with: either full objects, actual error objects thrown by JavaScript, or simple error messages without any other context. This code will take them all and format them into a standard structure.

Handling Commands

This is another one of those aspects of the engine that had to be easy to extend. In a project like this one, it makes total sense to assume new commands will pop up in the future. If there is something you want to avoid, it is probably having to make changes to the base code when adding something new three or four months down the line.

No amount of code comments will make the task of modifying code you haven’t touched (or even thought about) in several months easy, so the priority is to avoid as many changes as possible. Lucky for us, there are a few patterns we can implement to solve this. In particular, I used a mixture of the Command and the Factory patterns.

I basically encapsulated the behavior of each command inside a single class which inherits from a BaseCommand class that contains the generic code to all commands. At the same time, I added a CommandParser module that grabs the string sent by the client and returns the actual command to execute.

The parser is very simple: since all implemented commands have the actual command verb as their first word (i.e. “move north”, “pickup knife”, and so on), it’s a simple matter of splitting the string and getting the first part:

const requireDir = require("require-dir")
const validCommands = requireDir('./commands')

class CommandParser {

    constructor(command) {
        this.command = command
    }

    normalizeAction(strAct) {
        strAct = strAct.toLowerCase().split(" ")[0]
        return strAct
    }

    verifyCommand() {
        if(!this.command) return false
        if(!this.command.action) return false
        if(!this.command.context) return false

        let action = this.normalizeAction(this.command.action)

        if(validCommands[action]) {
            return validCommands[action]
        }
        return false
    }

    parse() {
        let validCommand = this.verifyCommand()
        if(validCommand) {
            let cmdObj = new validCommand(this.command)
            return cmdObj
        } else {
            return false
        }
    }
}

Note: I’m using the require-dir module once again to simplify the inclusion of any existing and new command classes. I simply add a new class to the folder, and the entire system is able to pick it up and use it.

With that being said, there are many ways this can be improved; for instance, adding synonym support for our commands would be a great feature (so saying “move north”, “go north” or even “walk north” would mean the same thing). That is something we could centralize in this class and have affect all commands at the same time.
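As a sketch of that idea (this is not in the current codebase), the parser’s normalizeAction method could run the first word through a synonym map before the lookup. The table below is hypothetical:

// every alias maps to the canonical command name
const SYNONYMS = {
    go: "move",
    walk: "move",
    grab: "pick"
}

// inside CommandParser:
normalizeAction(strAct) {
    let action = strAct.toLowerCase().split(" ")[0]
    return SYNONYMS[action] || action
}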

I won’t go into detail on any of the commands because, again, that’s too much code to show here, but you can see in the following route code how I managed to generalize the handling of the existing (and any future) commands:

/**
 * Interaction with a particular scene
 */
router.post('/:id/:playername/:scene', function(req, res, next) {

    let command = req.body
    command.context = {
        gameId: req.params.id,
        playername: req.params.playername,
    }

    let parser = new CommandParser(command)

    let commandObj = parser.parse() // return the command instance
    if(!commandObj) return next({ // error handling
        status: 400,
        errorCode: config.get("errorCodes.invalidCommand"),
        message: "Unknown command"
    })

    commandObj.run((err, result) => { // execute the command
        if(err) return next(err)

        res.json(result)
    })

})

All commands only require the run method — anything else is extra and meant for internal use.

I encourage you to go and review the entire source code (even download it and play with it if you like!). In the next part of this series, I’ll show you the actual client implementation and how it interacts with this API.

Closing Thoughts

I may not have covered a lot of my code here, but I still hope the article was helpful in showing you how I tackle projects — even after the initial design phase. I feel like a lot of people try to start coding as their first response to a new idea, and that can end up discouraging a developer, since there is no real plan in place nor any goals to achieve — other than having the final product ready (and that is too big a milestone to tackle from day one). So again, my hope with these articles is to share a different way to go about working solo (or as part of a small group) on big projects.

I hope you’ve enjoyed the read! Please feel free to leave a comment below with any suggestions or recommendations; I’d love to read what you think, and whether you’re eager to start testing the API with your own client-side code.

See you on the next one!

Smashing Editorial
(dm, yk, il)

Source: Smashing Magazine, Writing A Multiplayer Text Adventure Engine In Node.js: Game Engine Server Design (Part 2)

A Guide To Optimizing Images For Mobile

dreamt up by webguru in Uncategorized | Comments Off on A Guide To Optimizing Images For Mobile

A Guide To Optimizing Images For Mobile

A Guide To Optimizing Images For Mobile

Suzanne Scacca



(This is a sponsored article.) You know how critical it is to build websites that load quickly. All it takes is for a page to load one second too long for it to start losing visitors and sales. Plus, now that Google has made mobile-first indexing the default, you really can’t afford to let any performance optimizations fall by the wayside, especially given how difficult it can be to get your mobile site as speedy as your desktop one.

Google takes many factors into account when ranking a website and visitors take maybe a handful of factors into account when deciding to explore a site. At the intersection of the two is website speed.

It should come as no surprise that images cause a lot of the problems websites have with speed. And while you could always just trim the fat and build more minimally designed and content-centric sites, why compromise?

Images are a powerful force on the web.

Not only can well-chosen images improve the aesthetics of a site, but they also make it easier for your visitors to consume content. Of course, there are the SEO benefits of images, too.

So, today, let’s focus on how you can still design with as many images as you want without slowing down your website. This will require you to update your image optimization strategy and adopt a tool called ImageKit, but it shouldn’t take much work from you to get this new system in place.

The Necessity Of An Image Optimization Strategy For Mobile

According to HTTP Archive:

  • The median size of a desktop website in 2019 is 1939.5 KB.
  • The median size of a mobile website in 2019 is 1745.0 KB.

HTTP Archive charts how many kilobytes desktop and mobile websites are, on average. (Source: HTTP Archive)

If we don’t get a handle on this growth, it’s going to be impossible to meet consumer and Google demands when it comes to providing fast websites. That or we’re going to have to get really good at optimizing for speed.

Speaking of speed, let’s see what HTTP Archive has to say about image weight.

HTTP Archive plots out how much images weigh on desktop vs. mobile websites. (Source: HTTP Archive)

As it stands today:

  • The median size of images on desktop is 980.3 KB out of the total 1939.5 KB.
  • The median size of images on mobile is 891.7 KB out of the total 1745.0 KB.

Bottom line: images add a lot of weight to websites and consume a lot of bandwidth. And although this data shows that the median size of images on mobile is less than their desktop counterparts, the proportion of images-to-website is slightly larger.

That said, if you have the right image optimization strategy in place, this can easily be remedied.

Here is what this strategy should entail:

1. Size Your Images Correctly

There are lots of tedious tasks you’d have to handle without the right automations in place. Like resizing your images.

But you have to do it, right?

Let’s say you use Unsplash to source a number of images for a website you’re working on.

An example of a photo you’d find on Unsplash; this one comes from Mark Boss. (Source: Unsplash)

Unlike premium stock repositories where you might get to choose what size or file format you download the file in, you don’t get a choice here.

So, you download the image and any others you need. You then have the choice to use the image as is or manually resize it. After looking at the size of the file and the dimensions of the image, you realize it would be a good idea to resize it.

These are the original dimensions of the Unsplash image: 5591×3145 px. (Source: Unsplash)

This particular image exported as a 3.6 MB file at 5591×3145 px. That’s way too big for any website.

There’s no reason to upload images larger than 1 MB — and that’s even pushing it. As for dimensions? Well, that depends on the width of your site, but I think somewhere between 1200 and 2000 px should be your max.

You’re going to have to go through this same process whether images come from a stock site or from someone’s DSLR. The point is, no source image is ever going to come out the “right” size for your website, which means resizing has to take place at some point.

What’s more, responsive websites display images in different sizes depending on the device or browser they’re viewed on. And then there are the different use cases — like full-sized image vs. thumbnail or full-sized product photo vs. featured image.

So, there’s more resizing that has to be done even after you’ve gone through the trouble of manually resizing them.

Here’s what you shouldn’t do:

  • Resize images one-by-one on your own. It’s time-consuming and inefficient.
  • Rely on browser resizing to display your images responsively as it can cause issues.

Instead, you can integrate your existing image server (on your web host) or external storage service (like S3) with ImageKit. Or you can use ImageKit’s Media Library to store your files.

This is how easy it is to upload a new file to the ImageKit Media Library. (Source: ImageKit)

As you can see, ImageKit has accepted the upload of this Unsplash photo at its original dimensions and size. The same goes for wherever your files originate from.

However, once you integrate your images or image storage with ImageKit, the tool will take control of your image sizing. You can see how that’s done here:

ImageKit image URL endpoints enable users to control image resizing parameters more easily. (Source: ImageKit)

Let me briefly explain what you’re looking at above:

  • The Image Origin Preference tells ImageKit where images need to be optimized from. In this case, it’s the ImageKit Media Library, and they’ll be served over my website.
  • The Old Image URL is a reminder of where our images lived on the server.
  • The New Image URLs explain where your images will be served from once optimized through ImageKit.

The formula is simple enough. You take the original URL for your image and you transform it with the new ImageKit URL.

The ImageKit URL alone will instantly shrink the size of your image files. However, if you want to do some resizing of your image’s dimensions while you’re at it, you can use transformation parameters to do so.

For example, this is the Unsplash photo as seen from the media library of my website. It lives on my own servers, which is why the address shows my own URL:

How a full-sized image from Unsplash might appear if you leave it as is on your server. (Source: Unsplash)

To see what it looks like once ImageKit has transformed it, I swap out my domain name with the endpoint provided by ImageKit. I then add my image resizing parameters (they allow you to do more than just resize, too) and reattach the remainder of the URL that points to my image storage.

This is what happens when I use ImageKit to automatically resize my image to 1000×560 pixels:

ImageKit endpoints enable users to define how their images are to be resized, as in this example. (Source: ImageKit)

To create this resized image, I transformed the ImageKit URL into the following:

https://imagekit.io/vq1l4ywcv/tr:w-1000,h-560/…

It’s the width (w-) and height (h-) parameters that reduced the file’s dimensions.
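If you’d rather script these URLs than write them by hand, the formula is easy to wrap in a tiny helper. A minimal sketch, where the endpoint ID and image path are placeholders for your own values:

// splice a transformation segment into an ImageKit URL
const ENDPOINT = "https://ik.imagekit.io/your_endpoint_id"

function resizedUrl(imagePath, width, height) {
    return ENDPOINT + "/tr:w-" + width + ",h-" + height + "/" + imagePath
}

console.log(resizedUrl("photo.jpg", 1000, 560))
// https://ik.imagekit.io/your_endpoint_id/tr:w-1000,h-560/photo.jpg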

Now, as you can see, this isn’t as pixel-perfect as the original image, but that’s because I have quite a bit of compression applied to the file (80%). I’ll cover how that works below.

In the meantime, let’s focus on how great the image still looks as well as the gains we’re about to get in speed.

This is what can happen after ImageKit users resize their images. (Source: Unsplash)

Previously, this was a 3.6 MB file for the 5591×3145 px image. Now, it’s a 128 KB file for the 1000×560 px image.

To sweeten the deal further, ImageKit makes it easy to resize your images this way using URL-based image transformation. Essentially, it works like this:

  • You save one master image to ImageKit’s media library or your preferred server.
  • ImageKit automatically uses multiple techniques to bring down the image size significantly.
  • You can then use ImageKit’s resizing and cropping parameters to modify each image to cater to different device resolutions and sizes (see the sketch below).
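That last step pairs naturally with responsive images. Here’s a small sketch in the same vein as the helper above (the breakpoint widths and image path are arbitrary examples) that turns one master image into a srcset string:

const ENDPOINT = "https://ik.imagekit.io/your_endpoint_id"
const widths = [320, 640, 1000, 1600]

// one master image, many renditions; the browser picks the best fit
const srcset = widths
    .map((w) => ENDPOINT + "/tr:w-" + w + "/photo.jpg " + w + "w")
    .join(", ")

// use it as: <img src="(fallback URL)" srcset="..." sizes="100vw">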

When 91mobiles took advantage of this form of image optimization, it saved 3.5 TB of bandwidth every month. And they didn’t have to do anything but integrate with the platform. There was no need to move their images to ImageKit or another third-party storage service. It all took place within their legacy infrastructure.

2. Use Faster-loading Image Formats

It’s not just the size of your images that drain storage space and bandwidth. The file types you use have an impact, too.

PNGs, in general, are used for things like logos, images containing text and other super-fine images that have a transparent background. While you can use them to save your photos, they tend to produce the largest sizes. Even when lossless compression is applied, PNGs still remain larger in size than other file types.

GIFs are the animated counterpart of PNGs and use lossless compression as well.

JPGs, on the other hand, are best suited for colorful images and photos. They’re smaller in size and they shrink down with lossy compression. It’s possible to compress JPGs enough to get them to a manageable size, but you have to be careful as lossy compression degrades the overall quality of a file and there’s no turning back once it’s been done.

WebPs have been gaining in popularity since Google introduced them in the early 2010s. According to a Google study, WebPs can be anywhere between 25% and 34% smaller than JPGs. What’s more, you can use both lossy and lossless compression on WebPs to get them down to even smaller sizes.

Something to keep in mind with WebPs is that they’re not universally accepted. As of writing this, WebPs aren’t accepted by iOS devices. However, the latest versions of all other browsers, Google or otherwise, will gladly display them.

As for how ImageKit helps with this, it’s simple really:

This ImageKit setting puts the responsibility on ImageKit to serve the best file format. (Source: ImageKit)

When this setting is configured, ImageKit automatically determines the best file format to deliver each of your files in. It takes into account the original image’s format and content, along with whether or not the visitor’s device supports it.

JPGs, PNGs and GIFs will all be converted into WebPs when possible — say, if the visitor visits from Chrome (which accepts them). If it’s not possible — say, if the visitor visits from Safari (which doesn’t accept them) — ImageKit will convert to the best (i.e. smallest) format with the defined transformations. This might be a PNG or JPG.
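The mechanics behind this are worth knowing: browsers that can render WebP advertise it in their Accept request header. As a rough sketch of the general technique (this is not ImageKit’s actual code), a Node.js server could decide per request like so:

// inspect the Accept header to pick a format the browser supports
function bestFormat(req) {
    const accept = req.headers.accept || ""
    return accept.includes("image/webp") ? "webp" : "jpeg"
}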

Nykaa was able to capitalize on this image optimization strategy from ImageKit. Even though their website had already been designed using a mix of JPGs and PNGs stored in a number of places around the web, ImageKit took care of automating the image formats right from the original URLs.

3. Compress Images

Next, we need to talk about image compression. I’ve already referenced this a couple of times, but it breaks down into two types:

Lossless

This form of compression is used on PNGs and GIFs. To compress the file, metadata is stripped out. This way, the integrity of the image remains intact, but the file shrinkage isn’t as substantial as you’d get with lossy compression.

Lossy

This form of compression is applied to JPGs and WebPs. To compress the file, some parts of the image are “lost”, which can give certain spots a grainier appearance than the original image. In most cases, it’s barely noticeable unless you look closely at a side-by-side of the two images. But to your visitors, the degradation is easy to miss since there’s no original to compare against.

With lossy compression, you get to control what percentage of the file degrades. A safe range would be anything from 70% to 80%. ImageKit, by default, sets its optimization to 80%, and it estimates that you can save at least 20% to 25% of your file size just from that. In reality, though, it’s probably more (we’re looking at upwards of 40%, as in the Unsplash image example above):

This ImageKit setting enables users to decide how much lossy compression they want applied to their JPGs. (Source: ImageKit)

You can change this to whatever default you believe will maintain quality while giving you the image sizes that help your site load quickly.
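The default applies account-wide, but quality can also be requested per image in the transformation string, alongside the width and height parameters shown earlier. A hedged example, using the same placeholder endpoint as before:

// q-60 asks for heavier compression on this one image only
const url = "https://ik.imagekit.io/your_endpoint_id/tr:w-1000,q-60/photo.jpg"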

Whether you use the default or your own optimization setting, remember to switch on the additional compression settings available under the Advanced tab.

ImageKit provides Advanced image optimization settings for JPGs and PNGs. (Source: ImageKit)

These three settings, in particular, will enable you to compress as much, and as safely, as possible.

The first setting “Save a Copy”, for instance, keeps your original images on the ImageKit server. That way, you have a copy of the image pre-compression without having to manage the burden of it on your own server.

The second setting “Preserve Image Metadata” enables you to apply lossless compression when feasible.

And the last setting “PNG Image Compression Mode” allows you to decide what level of lossless optimization you want to use on your PNGs: maximum, minimum or none.

When done, you’ll end up with results like this side-by-side comparison:

This side-by-side comparison of an Unsplash image from Luke Jeremiah shows a compressed file and the original JPG. (Source: Unsplash)

This is a JPG from Unsplash. Can you tell which is the original and which is the compressed and resized version from ImageKit?

The one on the left with the black trim is:

  • 1500×1005 px
  • 266 KB
  • Compressed at 95%

The one on the right with the white trim is:

  • 5444×3649 px
  • 2.5 MB
  • Original

It’s up to you to decide which of the ImageKit compression and optimization settings you’re most comfortable using and then configure accordingly.

4. Save to and Pull Images from External Server

There are two ways to run images through ImageKit.

The first is by uploading your images directly to its Media Library:

ImageKit allows users to store their images in its Media Library instead of on their own servers. (Source: ImageKit)

The second is by integrating with your website or external storage service. We’ve actually already seen this part of ImageKit. It’s where you get your URL endpoints from so you can define your image parameters:

ImageKit integrates with content management systems, third-party storage and Cloudinary. (Source: ImageKit)

Even with all of the optimizations above, you might still be having a hard time with image storage and maintenance — either because of how they affect your speed or how much storage you have to hold them.

For instance, if you store your images on your server, you’ll eventually be constrained for space (unless you have a monster-sized hosting account).

When you’re building massive e-commerce stores or business websites with thousands or even millions of images and corresponding image sizes, you can’t afford to be hosting those images on your own. Granted, there is a way to serve them more quickly to visitors (which I’ll explain in the next point), but why take on the burden and cost of additional storage if you don’t have to?

5. Add a CDN

A CDN is another essential optimization tool for large repositories of images. Think of it like a second server, only this one caches (copies) your website and serves it from data centers located significantly closer to your visitors around the world.

As a result, sending your website and its thousands of product images from New York to a visitor on the other side of the world happens insanely fast.

With ImageKit, you get to enjoy the privilege of serving your images not just through its core processing servers, but through AWS CloudFront CDN (included in all plans) which has over 150 locations worldwide.

Sintra, a client of ImageKit, saw a big leap in performance after making the move. With the ImageKit image CDN (which has delivery nodes all around the globe), it saw an 18% drop in page load times.

Wrapping Up

What’s especially nice about ImageKit is that it’s not just a preventative measure against slowdowns caused by images. You can use it to retroactively fix and improve mobile websites and PWAs, even if they already have millions of images on them. What’s more, the performance center makes it easy to keep an eye on your website’s images and identify opportunities for speed enhancements.

Plus, as you can see from the tips above, ImageKit has simplified a lot of the work you’d otherwise have to do, whether you’d handle it manually or configure it through a plugin.

With consumers and Google becoming pickier by the day about how quickly websites load on mobile, this is the kind of image optimization strategy you need. It’ll lighten your load while ensuring that any images added before or after ImageKit are optimized to the fullest. Even better, your clients will reap the benefits of more leads and greater conversions.

Smashing Editorial
(yk)

Source: Smashing Magazine, A Guide To Optimizing Images For Mobile

Collective #559

dreamt up by webguru in Uncategorized | Comments Off on Collective #559

Gridsome

Gridsome is a free and open source Vue.js-powered framework for building websites and apps that are fast by default.

Check it out

Webwide

A free and inclusive discussion community for web designers, developers and makers.

Check it out

TinaCMS

In case you didn’t stumble upon it yet: Tina is an open-source site editing toolkit for React-based frameworks – Gatsby and Next.js

Check it out

Image orientation on the web

In this article you will learn about the current status of image orientation on the web, how to correct orientation of images using Node.js, and how browsers will handle this in the future.

Read it

Zero

Zero is a small graphics app that uses JavaScript to replicate the functionality of a GPU and uses the terminal to display its rendered output via Node.js’ stdout.

Check it out

CoBeats

CoBeats helps you keep and organize all web things like bookmarks, screenshots, videos, files, images and more.

Check it out

Collective #559 was written by Pedro Botelho and published on Codrops.


Source: Codrops, Collective #559

Great Expectations: Using Story Principles To Anticipate What Your User Expects

dreamt up by webguru in Uncategorized | Comments Off on Great Expectations: Using Story Principles To Anticipate What Your User Expects

Great Expectations: Using Story Principles To Anticipate What Your User Expects

Great Expectations: Using Story Principles To Anticipate What Your User Expects

John Rhea



Whether it’s in a novel, the latest box office smash, or when Uncle Elmer mistook a potted cactus for a stress ball, we all love stories. There are stories we love, stories we hate, and stories we wish we’d never experienced. Most of the good stories share structure and principles that can help us create consistent website experiences. Experiences that speak to user expectations and guide them to engage with our sites in a way that benefits both of us.

In this article, we’ll pull out and discuss just a few examples of how thinking about your users’ stories can increase user engagement and satisfaction. We’ll look at deus ex machina, ensemble stories, consistency, and cognitive dissonance, all of which center on audience expectations and how your site is meeting those expectations or not.

We can define a story as the process of solving a problem. Heroes have an issue, and they set out on a quest to solve it. Sometimes that’s epic and expansive like the Lord of the Rings or Star Wars and sometimes it’s small and intimate such as Driving Miss Daisy or Rear Window. At its core, every story is about heroes who have a problem and what they do to solve it. So too are visits to a website.

The user is the hero, coming to your site because they have a problem. They need to buy a tchotchke, hire an agency or find the video game news they like. Your site can solve that problem and thus play an important role in the user’s story.

Deus Ex Machina

It’s a term meaning “god from the machine” that goes back to Greek plays — even though it’s Latin — when a large, movable scaffolding or “machine” would bring out an actor playing a god. In the context of story, it’s often used to describe something that comes out of nowhere to solve a problem. It’s like Zeus showing up at the end of a play and killing the villain. It’s not satisfying to the audience. They’ve watched the tension grow between the hero and the villain and feel cheated when Zeus releases the dramatic tension without solving that tension. They watched a journey that didn’t matter because the character they loved did not affect the ending.

The danger of deus ex machina is most visible in content marketing. You hook the audience with content that’s interesting and applicable but then bring your product/site/whatever in out of nowhere and drop the mic like you won a rap battle. The audience won’t believe your conclusion because you didn’t journey with them to find the solution.

If, however, the author integrates Zeus into the story from the beginning, Zeus will be part of the story and not a convenient plot device. Your solutions must honor the story that’s come before, the problem and the pain your users have experienced. You can then speak to how your product/site/whatever solves that problem and heals that pain.

State Farm recently launched a “Don’t Mess With My Discount!” campaign:

Kim comes in to talk to a State Farm rep who asks about a Drive Safe and Save discount. First, for the sake of the discount, Kim won’t speed up to make a meeting. Next, she makes herself and her child hold it till they can get home driving the speed limit. Then, in the midst of labor, she won’t let her partner speed up to get them to the hospital. (Don’t mess with a pregnant lady or her discount.) Finally, it cuts back to Kim and the agent.

State Farm’s branding and their signature red color are strong presences in both bookend scenes with the State Farm representative. By the end, when they give you details about their “Drive Safe and Save” discount you know who State Farm is, how they can help you, and what you need to do to get the discount.


It’s not a funny story that’s a State Farm commercial in disguise, but a State Farm commercial that’s funny.

Throughout the ad, we know State Farm’s motivations and don’t feel duped into liking something whose only goal is to separate us from our money. They set the expectation of this story being an ad in the beginning and support that throughout.

Another Approach

Sometimes putting your name upfront in the piece might feel wrong or too self-serving. Another way to get at this is to acknowledge the user’s struggle, the pain the user or customer already feels. If your site doesn’t acknowledge that struggle, then your product/site/whatever seems detached from their reality, a deus ex machina. But if your content recognizes the struggle they’ve been through and how your site can solve their problem, the pitch for deeper engagement with your site will be a natural progression of the user’s story. It will be the answer they’ve been searching for all along.

Take this testimonial from Bizzabo:

Emily Fullmer, Director of Global Events for Greenbook, said: “We are now able to focus less on tedious operations, and more on creating a memorable and seamless experience for our attendees.”

Bizzabo solved a real-world problem for Greenbook.

It shows the user where Greenbook was, i.e. mired in tedious tasks, and how Bizzabo helped them get past tedium to do what Greenbook says they do best: make memorable experiences. Bizzabo doesn’t come out of the woodwork to say “I’m awesome” or solve a problem you never had. They have someone attesting to how Bizzabo solved a real problem that this real customer needed to be fixed. If you’re in the market to solve that problem too, Bizzabo might be the place to look.

Ensemble Stories

Some experiences, like some stories, aren’t about a single person. They’re about multiple people. If the story doesn’t give enough attention to each member, that person won’t seem important or like a necessary part of the story. If that person has a role in the ending, we feel cheated or think it’s a deus ex machina event. If any character is left out of a story, it should change the story. It’s the same way with websites. The user is the story’s hero, but she’s rarely the only character. If we ignore the other characters, they won’t feel needed or be interested in our websites.

Sometimes a decision involves multiple people because a single user doesn’t have the authority to decide. For instance, Drupalcon Seattle 2019 has a “Convince Your Boss” page. They showcase the benefits of the conference and provide materials to help you get your boss to agree to send you.

You could also offer a friends-and-family discount that rewards both the sharer and the sharee. (Yes, as of this moment, “sharee” is now a word.) Dropbox does this with their sharing program. If you share their service with someone else and they create an account, you get additional storage space.

Dropbox offers an additional 250 MB of space for every friend you get to join their service.

You get extra space, you get extra space, and you get extra space (when you invite a friend).

But you don’t have to be that explicit about targeting other audiences than the user themselves. In social networks and communities, the audience is both the user and their friends. The site won’t reach a critical mass if you don’t appeal to both. I believe Facebook beat MySpace early on by focusing on the connection between users and thus serving both the user and their friends. MySpace focused on individual expression. To put it another way, Facebook included the user’s friends in their audience while MySpace didn’t.

Serving Diametrically Opposed Heroes

Many sites that run on ad revenue also have to think about multiple audiences, both the users they serve and the advertisers who want to reach those users. They are equally important in the story, even if their goals are sometimes at odds. If you push one of these audiences to the side, they’ll feel like they don’t matter. When all you care about is ad revenue, users will flee because you’re not speaking to their story any longer or giving them a good experience. If advertisers can’t get good access to the user then they won’t want to pay you for ads and revenue drops off.

Just about any small market newspaper website will show you what happens when you focus only on advertisers’ desires. Newspaper revenue streams have gone so low they have to push ads hard to stay alive. Take, for instance, the major newspaper from my home state of Delaware, the News Journal. The page skips and stutters as ad content loads. Click on any story and you’ll find a short article surrounded by block after block after block of ad content. Ads are paying the bills but with this kind of user experience, I fear it won’t be for long.

Let me be clear that advertisers and users do not have to be diametrically opposed, it’s just difficult to find a balance that pleases both. Sites often lean towards one or the other and risk tipping the scales too far either way. Including the desires of both audiences in your decisions will help you keep that precarious balance.

One way to do both is to have ads conform to the essence of your website, meaning the thing that makes your site different, i.e. the “killer app” or sine qua non of your website. In this way, you get ads that conform to the reason the users are going to the site. Advertisers have to conform to the ad policy, but if it really hits on the reason users are going to the site, they should get much greater engagement.

On my own site, 8wordstories.com, ads are allowed, but they’re only allowed an image, eight words of copy, and a two-word call to action. Thus when users go to the site to get pithy stories, eight words in length, the advertisements will similarly be pithy and short.



Consistency

The hero doesn’t train as a medieval knight for the first half of the story and then find herself in space for the second half. That drastic shift can make the audience turn on the story for dashing their expectations. They think you did a bait-and-switch, showing them the medieval story they wanted and then switching to a space story they didn’t want.

If you try to hook users with free pie, but you sell tubas, you will get lots of pie lovers and very few tuba lovers. Worse yet is to have the free pie contingent on buying a tuba. The thing they want comes with a commitment or price tag they don’t. This happens a lot with a free e-book when you have to create an account and fill out a lengthy form. For me, that price has often been too high.

Make sure the way you’re hooking the audience is consistent with what you want them to read, do, or buy. If you sell tubas offer a free tuba lesson or polishing cloth. This’ll ensure they want what you provide and they’ll think of you the next time they need to buy a tuba.

That said, it doesn’t mean you can’t offer free pie, but it shouldn’t get them in the door, it should push them over the edge.

Audible gives you a thirty-day free trial plus an audiobook to keep even if you don’t stay past the trial. They’re giving you a taste of the product. When you say, “I want more,” you know where to get it.

While not offering a freebie, Dinnerly (and most of the other bazillion meal kit delivery companies) offers a big discount on your first few orders, encouraging new customers to try them out. This can be an especially good model for products or services that have fixed costs associated with enticing new customers.

Dinnerly offers a discount on each of your first three meal kits. Hmmm… maybe they should offer free pie.

Cognitive Dissonance

There’s another danger concerning consistency, but this one’s more subtle. If you’re reading a medieval story and the author says the “trebuchet launched a rock straight and true, like a spaceship into orbit,” it might be an appropriate allusion for a modern audience, but it’s anachronistic in a medieval story, a cognitive dissonance. Something doesn’t quite make sense or goes against what they know to be true. In the same way, websites that break the flow of their content can alienate their audience without even meaning to (such as statistics that seem unbelievable or are so specific anyone could achieve them).

112% of people reading this article are physically attractive.

(Here’s lookin’ at you, reader.)

This article is the number one choice of physicians in Ohio who drive Yugos.

(Among other questions, why would a European-car-driving Ohioan doctor read a web user experience article?)

These “statistics” break the flow of the website because they make the user stop and wonder about the website’s reputability. Any time a user is pulled out of the flow of a website, they must decide whether to continue with the website or go watch cat videos.

Recently, I reviewed proposals for a website build at my day job. The developers listed in the proposal gave me pause. One with the title “Lead Senior Developer” had seven years of experience. That seemed low for a “lead, senior” developer, but possible. The next guy was just a “web developer” but had twenty years of experience. Even if that’s all correct, their juxtaposition made them look ridiculous. That cognitive dissonance pulled me out of the flow of the proposal and made me question the firm’s abilities.

Similarly, poor quality photos, pixelated graphics, unrelated images, tpyos, mispelllings, weird bolding and anything else that sticks out potato can cause cognitive dissonance and tank a proposal or website (or article). The more often you break the spell of the site, the harder it will be for clients/users to believe you/your product/site/thing are as good as you say. Those cat videos will win every time because they always meet the “lolz” expectation.

Conclusion

Users have many expectations when they come to your site. Placing your users in the context of a story helps you understand those expectations and their motivations. You’ll see what they want and expect, but also what they need. Once you know their needs, you can meet those needs. And, if you’ll pardon my sense of humor, you can both …live happily ever after.

Smashing Editorial
(cct, ra, yk, il)

Source: Smashing Magazine, Great Expectations: Using Story Principles To Anticipate What Your User Expects

Smashing Monthly Roundup: Community Resources And Favorite Posts

dreamt up by webguru in Uncategorized | Comments Off on Smashing Monthly Roundup: Community Resources And Favorite Posts

Smashing Monthly Roundup: Community Resources And Favorite Posts

Smashing Monthly Roundup: Community Resources And Favorite Posts

The Smashing Editorial



This is the first monthly update that the Smashing team will be publishing, to highlight some of the things we have enjoyed reading over the past month. Many of the included posts are sourced from the most popular links from our Smashing Newsletter. If you don’t get our newsletter yet, then sign up here to receive carefully curated links from the team every two weeks.

SmashingConf News

We’ve just wrapped up our final SmashingConf of the year in New York. Videos of the event will be on their way soon, but we have already published a write-up and all of the videos from our Freiburg event held in September. You can find all of those in our post “SmashingConf Freiburg 2019”.

Also, we’ve announced the dates for SmashingConf 2020! Would you like to join us in San Francisco, Freiburg, New York, or our new city of Austin? If so, get your tickets now at super early-bird prices, and keep an eye out for line-up announcements very soon.

We publish a new article every day, and so if you’re not subscribed to our RSS feed or follow us on social media, you may miss out on some brilliant articles! Here are some that our readers seemed to enjoy and recommend further:

  • “How To Use Breadcrumbs On A PWA” by Suzanne Scacca
    If you’re worried that your PWA is going to be difficult to navigate without some guidance, put breadcrumbs to work. In this article, Suzanne explains just how.
  • “Design Systems Are About Relationships” by Ryan DeBeasi
    Design systems can improve usability, but they can also limit creativity or fall out of sync with actual products. Let’s explore how designers and developers can create more robust design systems by building a culture of collaboration.
  • “A Guide To New And Experimental CSS DevTools In Firefox” by Victoria Wang
    Ever since releasing Grid Inspector, the Firefox DevTools team has been inspired to build a new suite of tools to solve the problems of the modern web. In this article, Victoria explains seven tools in detail.
  • “Editorial Design Patterns With CSS Grid And Named Columns” by Rachel Andrew
    By naming lines when setting up our CSS Grid layouts, we can tap into some interesting and useful features of Grid — features that become even more powerful when we introduce subgrids.

Best Picks From Our Newsletter

We’ll be honest: Every second week, we struggle with keeping the Smashing Newsletter issues at a moderate length — there are just so many talented folks out there working on brilliant projects! So, without wanting to make this monthly update too long either, we’re shining the spotlight on the following projects:

HTML Email

Can I Email…?

We all know and love caniuse.com. Unfortunately, if you wanted to test support for web standards in HTML email, it wasn’t really easy. Until now. Inspired by the successful concept, Can I Email lets you check support for more than 50 HTML and CSS features in 25 email clients, and since the site only launched last month, more is already planned.

Can I email…? launched by Rémi Parmentier and the team at Tilt Studio

Made for and by the email geeks community, the data that fuels the project is available on GitHub and anyone can contribute to it. A nice detail: the included Email Client Support Scoreboard ranks email clients based on how well they support the features. A useful little helper for anyone who’s wrangling HTML email.

Email Design Inspiration

Standing out from the flood of emails that reach our inboxes every day is hard, not only for promotional campaigns but also for transactional emails and newsletters. So how about some inspiration from how others manage to spark curiosity and interest to save their emails from ending up in the junk-mail folder, victims of the quest for inbox zero?

Email Love by Rob Hope

Curated by Rob Hope, Email Love showcases well-crafted emails that you can turn to for fresh ideas — a look inside the code of each email is included, of course. Exciting discoveries guaranteed!

Fonts

Tools To Circumvent Web Font Pitfalls

Web fonts are easy to implement, but they can have a significant impact on a site’s performance, too. To help you speed up the time to first meaningful paint, Peter Müller built Subfont. The command-line tool analyzes your page to generate the most optimal web font subsets and inject them into your page. Subfont currently supports Google fonts as well as local fonts.

Font Style Matcher by Monica Dinculescu

Speaking of web fonts: To prevent flash of unstyled text from causing layout shifts, you might want to consider choosing your fallback font in relation to your web font’s x-heights and widths. The better they match, the less likely your layout will shift once the web font is loaded.

Monica Dinculescu came up with Font Style Matcher to help find just that perfect fallback font. Before you opt for a fallback font, you might also want to check how well it is supported across different operating systems to not run into issues. Three small but mighty tools to circumvent some of the most common web font pitfalls.

A Tiny Guide To Variable Color Fonts

“The tech is new, the adventure is big!” If you look at the experiments which Arthur Reinders Folmer of Typearture did with variable color fonts, this quote truly hits the mark. Arthur uses variable color fonts to create animations that are not only awe-inspiring eye candy but also explore the full potential of the font technology.

“Variable Color Fonts: How Do They Work?” by Arthur Reinders Folmer

They might allow little customization compared to SVGs, but variable color fonts are easier to implement and they offer a lot of room for creative adventures, too — using input from the microphone, camera, or gyroscope to adjust the variables and animate the illustrations, for example. Sounds exciting? Arthur put together a little guide in which he dives deeper into the tech behind his experiments. A fantastic overview of what’s possible with variable color fonts today.

Performance

Automating Image Compression

The transfer size of requested images has grown by 52% on desktop and 82% on mobile within the last year, with imagery accounting for over half of the median page weight. These are figures that once again make clear how crucial it is that images are optimized before they hit production. Now, wouldn’t it be handy if you could automate the compression step?

Calibre’s new GitHub Action image-actions

Well, the folks at Calibre asked themselves the same question and built a GitHub Action that does exactly that: it automatically optimizes the images in your pull request — without any quality loss thanks to mozjpeg and libvips, so that no image accidentally skips compression. A real timesaver.

Accessibility

Accessibility Support

There are many different ways that assistive technologies interact with browsers and code. Since it’s still not possible to fully automate screen readers and voice control software, we are left with having to do manual tests. And that’s where a11ysupport.io comes into play.

Accessibility Support by Michael Fairchild

Originally created by Michael Fairchild, this community-driven website aims to inform developers about which features assistive technologies actually support. It’s a project that is active and contributions are always welcome, so start testing away!

Button Contrast Checker

Do your buttons have enough contrast? The Button Contrast Checker built by the folks at Aditus helps you find out. Enter your domain and the tool tests if the buttons on the site are compliant with WCAG 2.1.

Button Contrast Checker by Aditus

To deliver realistic results, the checker not only tests the default state of the buttons but also takes hover and focus states, as well as the adjacent background, into account. A nice detail: each time you scan a page, the results are stored in a unique URL which you can share with your team. A precious little helper.

Learning To Code

Taking Your Coding Skills To The Next Level

CSS animation, Grid, Flexbox… The web is evolving at such a fast pace that there’s always something new to learn. And, well, what better occasion could there be to finally dive into the topic you’ve been wanting to tackle for so long than with a fun little game?

Flexbox Zombies by Dave Geddes

If you’ve always wanted to conquer deep space — and learn the basics of object animation in CSS along the way — the CSS Animation course by HTML Academy has some exciting tasks for you to solve. To help your CSS Grid skills grow and blossom, there’s Grid Garden where you use CSS to grow a carrot garden.

If zombies are more up your alley, try Flexbox Zombies. It’ll give you the expertise you need to survive the living dead, all thanks to your coding skills! Or try guiding a friendly little frog to its lily pad in Flexbox Froggy to finally get to grips with the Flexbox concept. Another cool Flexbox learning game that shouldn’t be left unmentioned is Flexbox Defense. Last but not least, if you’re struggling with CSS selectors, there’s CSS Diner to teach you how to select elements. Now, who said learning can’t be fun?

How To Write Better JavaScript

JavaScript is one of the most popular programming languages, and even more than 20 years after it was born, it is constantly evolving. But how can you get better at it?

“Practical Ways To Write Better JavaScript” by Ryland Goldstein

Ryland Goldstein shares some of the top methods he uses to write better JavaScript — by using TypeScript to improve team communication and make refactoring easier or linting your code and enforcing a style, for example. As Ryland points out, it’s a continuous process, so take things one step at a time, and before you know it, you’ll be a JavaScript ace.

Learn Regex With Crosswords

If you’ve got a sweet spot for riddles and logic puzzles, then Regex Crossword is for you. Regex Crossword is a crossword puzzle game where the clues are defined using regular expressions — who said regex can’t be fun?

Regex Crossword by Ole Michelsen and Maria Hagsten

There are different difficulty levels: cut your teeth on an easy set of crosswords to learn the basics, or put your skills to the test as the puzzles get bigger and more complex. A puzzle generator is also included, so if you feel like making up your own puzzles for others to unravel, there’s nothing to hold you back.

And, The Rest!

Tips To Master Your Next Tech Job Interview

The job-hunting process can be intimidating, especially if you’re just about to get your career started. To help you tackle the challenge well, Yangshun Tay put together the Tech Interview Handbook.

Tech Interview Handbook by Yangshun Tay

This free resource takes you through the entire process; from working on your resume to negotiating with the employer once the interview has ended, while curated practice questions get you fit for both the technical and behavioral questions that might pop up along the way. A good read, not only for prospective web professionals.

Behind The Scenes Of Design Teams

While many companies are driven by features and technology, debates about the importance of design have become rare over the last few years. It’s reflected in publicly announced case studies, design systems, large-scale design overhauls, and most recently in dedicated pages for design teams — be it Uber, Google, Spotify, Medium, Dropbox, Slack, Amazon or AirBnB.

Image credit: Intercom

Recently, Intercom announced Intercom.Design, a resource dedicated to its design teams, products, processes and public case studies, including internal UI recommendations and expectations from different product and content designer levels. Wonderful sources of inspiration to improve your design team and explore behind the scenes of how products are designed and built. (Thanks for the tip, Kostya Gorskiy!)

Royalty-Free AI-Generated Faces

100,000 photos of faces of different ages, genders, and ethnicities. That might not sound like anything groundbreaking, but it is: the faces don’t exist. They are products of artificial intelligence.

Generated Photos by Generated Media, Inc.

The Generated Photos project did exactly that. With the help of AI, a team of 20 AI and photography professionals generated this impressive number of high-quality faces that you can download and use in your projects for free (for non-commercial purposes). But the plans go even further: the aim is to build an API that enables anyone to use artificial intelligence to generate interesting, diverse faces for their projects, mockups, and presentations — without bothering about copyright and distribution rights questions. Will this be the end of conventional stock photography?

Monochromatic Color Palettes Made Easy

If you’ve ever tried to generate a consistent monochromatic color palette, you know that this can be a boring task. After once again messing around with endless copy-paste commands to create a nice palette, Dimitris Raptis decided to change that. His solution: CopyPalette.

CopyPalette by Dimitris Raptis

CopyPalette lets you create SVG palettes with ease. All you need to do is select a base color, the contrast ratio of the shades, and the number of color variations you’d like to have, and the tool generates a perfectly-balanced color palette that you can copy and paste into your favorite design tool. A true timesaver.

The Art Of Symbols

For more than 40,000 years, humans have been using symbols to communicate complex ideas. And as designers, we still do so today.

“Art of Symbols” by Emotive Brand agency

Art of Symbols, a 100-day project by the design team at Emotive Brand, set out to explore how ancient symbols inform contemporary brand design. After all, a lot of the symbols that are part of our vocabulary as designers today existed long ago, appearing as early as rock paintings and engravings. If you’re curious to learn more about their origins and meanings and are up for some beautiful eye candy, this project will keep you busy for a while.

Smarter Patterns For Designing With AI

The power of artificial intelligence is huge, but with it also come ethical challenges and a lot of responsibility: responsibility for users who might be confused and scared by AI if a clear concept is lacking, who might want to choose how much AI they interact with, and who need to be protected against harmful practices.

Smarter Patterns by Myplanet

Based on research of how AI is being used and understood today, the software studio Myplanet put together Smarter Patterns, a library to start a discussion about these topics and help designers tackle the challenges of AI in interface design. The resource currently features 28 patterns that empower designers to create meaningful AI experiences.

Instant Offline Access With Dash

If you’re one of those folks who simply cannot sleep on a plane and wished there was just a super-productive way to get some work done instead, you’re probably always on the lookout for tools that’ll get you through those flights even with spotty WiFi. Well, search no more — we’ve stumbled upon a pretty useful one!

Dash for macOS and iOS

In case you haven’t heard of it yet, Dash is a free and open-source API documentation browser that gives your iPad and iPhone instant offline access to 200+ API documentation sets and 100+ cheatsheets. Folks such as Sarah Drasner use it especially on the day before a long trip; all you need to do is download all the docs you need, and you’re all set! You can even generate your own docsets or request docsets to be included. Nifty!

A Collection Of Personal Sites

With the Internet ingrained in our day-to-day lives, what’s the best way to voice your own ideas, thoughts, and feelings? A personal site, of course! And because there are so many of them out there, Andy Bell decided to keep a collection of some so that folks can discover each other’s work and even receive updates from their RSS feeds.

Personal Sites by Andy Bell

If you’d like your site to join the collection, you’ll find simple instructions on GitHub; your site will appear in the list once your request has been approved. What a great way to find folks who share your interests and learn new ways to develop and design websites!


From Smashing With Love

A month can be a long time to stay on top of things, so please do subscribe to our bi-weekly newsletter if you still haven’t. Each and every issue is written and edited with love and care. No third-party mailings or hidden advertising — promise!

You can also follow us on Twitter, Facebook and LinkedIn — always feel free to reach out and share your projects with us! We love hearing from you!

Keep up the brilliant work, everyone! You’re smashing!

Smashing Editorial
(vf, ra, cm, il)

Source: Smashing Magazine, Smashing Monthly Roundup: Community Resources And Favorite Posts

Collective #558

dreamt up by webguru in Uncategorized | Comments Off on Collective #558



Our Sponsor

WordPress speed test

Test the speed of your WordPress site, including both desktop and mobile versions, and receive actionable recommendations to make it faster.

Check it out

Spars

A general toolkit for creating interactive web experiences containing everything from an asset loader to a frame scheduler, persistent caching and thread pooling. By Tim van Scherpenzeel.

Check it out

Collective #558 was written by Pedro Botelho and published on Codrops.


Source: Codrops, Collective #558