Collective #463

dreamt up by webguru in Uncategorized | Comments Off on Collective #463


Introducing Hooks

Hooks are a new React feature proposal that lets you use state and other React features without writing a class.

Read it










Initab

A Chrome extension that will fill a new tab page with a dashboard of useful programming tools and resources.

Check it out








lazy-brush

A library that provides the math required to implement a “lazy brush” for smooth drawing with a mouse, finger or any pointing device.

Check it out




TeXMe

TeXMe is a lightweight JavaScript utility to create self-rendering Markdown and LaTeX documents.

Check it out







Collective #463 was written by Pedro Botelho and published on Codrops.


Source: Codrops, Collective #463

The CSS Working Group At TPAC: What’s New In CSS?

dreamt up by webguru in Uncategorized | Comments Off on The CSS Working Group At TPAC: What’s New In CSS?

The CSS Working Group At TPAC: What’s New In CSS?


Rachel Andrew



Last week, I attended W3C TPAC as well as the CSS Working Group meeting there. Various changes were made to specifications, and discussions were had which I feel are of interest to web designers and developers. In this article, I’ll explain a little bit about what happens at TPAC, and show some examples and demos of the things we discussed for CSS in particular.

What Is TPAC?

TPAC is the Technical Plenary / Advisory Committee Meetings Week of the W3C: a chance for all of the various working groups that are part of the W3C to get together under one roof. The event is in a different part of the world each year; this year it was held in Lyon, France. At TPAC, Working Groups such as the CSS Working Group have their own meetings, just as we do at other times of the year. However, because we are all in one building, people from other groups can more easily come as observers, and cross-working-group interests can be discussed.

Attendees of TPAC are typically members of one or more of the Working Groups, working on W3C technologies. They will either be representatives of a member organization or Invited Experts. As with any other meetings of W3C Working Groups, the minutes of all of the discussions held at TPAC will be openly available, usually as IRC logs scribed during the meetings.

The CSS Working Group

The CSS Working Group meet face-to-face at TPAC and on at least two other occasions during the year; this is in addition to our weekly phone calls. At all of our meetings, the various issues raised on the specifications are discussed, and decisions made. Some issues are kept for face-to-face discussions due to the benefits of being able to have them with the whole group, or just being able to all get around a whiteboard or see a demo on screen.

When an issue is discussed in any meeting (whether face-to-face or teleconference), the relevant GitHub issue is updated with the minutes of the discussion. This means if you have an issue you want to keep track of, you can star it and see when it is updated. The full IRC minutes are also posted to the www-style mailing list.

Here is a selection of the things we discussed that I think will be of most interest to you.

CSS Scrollbars

The CSS Scrollbars specification seeks to give a standard way of styling the size and color of scrollbars. If you have Firefox Nightly, you can test it out. To see the examples below, use Firefox Nightly and enable the flags layout.css.scrollbar-width.enabled and layout.css.scrollbar-color.enabled by visiting about:config.

The specification gives us two new properties: scrollbar-width and scrollbar-color. The scrollbar-width property can take a value of auto, thin, none, or a length (such as 1em). It looks as if the length value may be removed from the specification. As you can imagine, it would be possible for a web developer to make a very unusable scrollbar by playing with the width, so it may be better to let the browser decide on a sensible exact width and simply offer the choice between the default and thin scrollbars. Firefox has not implemented the length option.

If you use auto as the value, then the browser will use the default scrollbars: thin will give you a thin scrollbar, and none will show no visible scrollbar (but the element will still be scrollable).
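As a minimal sketch (the class name is illustrative, and assumes the flags mentioned above are enabled):

/* .scroller is an illustrative class name for a scrolling container. */
.scroller {
  overflow-y: auto;
  scrollbar-width: thin;
}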


A scrolling element with a thin scrollbar
In this example I have set scrollbar-width: thin. (Large preview)

In a browser with support for CSS Scrollbars, you can see this in action in the demo:

See the Pen CSS Scrollbars: scrollbar-width by Rachel Andrew (@rachelandrew) on CodePen.

The scrollbar-color property deals with — as you would expect — scrollbar colors. A scrollbar has two parts which you may wish to color independently:

  • thumb
    The slider that moves up and down as you scroll.
  • track
    The scrollbar background.

The values for the scrollbar-color property are auto, dark, light and <color> <color>.

Using auto as a keyword value will give you the default scrollbar colors for that browser; dark will provide a dark scrollbar, either the dark mode of that platform or a custom dark mode; light will provide a light scrollbar, either the light mode of the platform or a custom light mode.

To set your own colors, you add two space-separated colors as the value. The first color will be used for the thumb and the second one for the track. You should take care that there is enough contrast between the two colors, as otherwise the scrollbar may be difficult to use for some people.
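A rough sketch (the selector and the colors are illustrative):

/* Illustrative values: thumb color first, then track color. */
.scroller {
  overflow-y: auto;
  scrollbar-color: rebeccapurple #e2e2e2;
}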


A scrolling element with a purple and white scrollbar
In this example, I have set custom colors for the scrollbar elements. (Large preview)

In a browser with support for CSS Scrollbars, you can see this in the demo:

See the Pen CSS Scrollbars: scrollbar-color by Rachel Andrew (@rachelandrew) on CodePen.

Aspect Ratio Units

We’ve been using the padding hack in CSS to achieve aspect ratio boxes for some time; however, with the advent of Grid Layout and better ways of sizing content, having a real way to do aspect ratios in CSS has become a more pressing need.
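For reference, a common form of the padding hack looks something like this (the selector names are illustrative); percentage padding resolves against the element’s width, so 56.25% produces a 16:9 box:

/* Illustrative selectors: percentage padding resolves against the width. */
.aspect-16-9 {
  position: relative;
  height: 0;
  padding-bottom: 56.25%; /* 9 / 16 = 0.5625 */
}
.aspect-16-9 > iframe {
  position: absolute;
  top: 0;
  left: 0;
  width: 100%;
  height: 100%;
}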

There are two issues raised on GitHub which relate to this requirement.

There is now a draft spec in Level 4 of CSS Sizing, and the decision of the meeting was that this needed further discussion on GitHub before any decisions can be made. So, if you are interested in this, or have additional use cases, the CSS Working Group would be interested in your comments on those issues.

The :where() Functional Pseudo-Class

Last year, the CSSWG resolved to add a pseudo-class which acted like :matches() but with zero specificity, thus making it easy to override without needing to artificially inflate the specificity of later elements to override something in a default stylesheet.

The :matches() pseudo-class might be new to you as it is a Level 4 Selector; however, it allows you to specify a group of selectors to apply some CSS to. For example, you could write:

.foo a:hover,
p a:hover {
  color: green;
}

Or, with :matches():

:matches(.foo, p) a:hover {
  color: green;
}

If you have ever had a big stack of selectors just in order to set the same couple of rules, you will see how useful this will be. The following CodePen uses the prefixed names -webkit-any and -moz-any to demonstrate the :matches() functionality. You can also read more about :matches() on MDN.

See the Pen :matches() and prefixed versions by Rachel Andrew (@rachelandrew) on CodePen.

Where we often do this kind of stacking of selectors, and thus where :matches() will be most useful, is in some kind of initial, default stylesheet. However, we then need to be careful when overwriting those defaults that any overwriting is done in a way that will ensure it is more specific than the defaults. It is for this reason that a zero-specificity version was proposed.

The issue that was discussed in the meeting was in regard to naming this pseudo-class; you can see the final resolution here, and if you wonder why various names were ruled out, take a look at the full thread. Naming things in CSS is very hard — because we are all going to have to live with it forever! After a lot of debate, the group voted and decided to call this selector :where().
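Assuming the selector ships as resolved, the earlier :matches() example could be written with :where() like this; the selectors inside the parentheses add nothing to specificity, so a later plain a:hover rule could override it:

/* Hypothetical usage based on the resolution above. */
:where(.foo, p) a:hover {
  color: green;
}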

Since the meeting, and while I was writing up this post, a suggestion has been raised to rename :matches() to :is(). Take a look at the issue and comment if you have any strong feelings either way!

Logical Shorthands For Margins And Padding

On the subject of naming things, I’ve written about Logical Properties and Values here on Smashing Magazine in the past; take a look at “Understanding Logical Properties and Values”. These properties and values provide flow-relative mappings. This means that if you are using a writing mode other than a horizontal top-to-bottom writing mode such as English, things like margins and padding, width and height follow the text direction and are not linked to the physical screen dimensions.

For example, for physical margins we have:

  • margin-top
  • margin-right
  • margin-bottom
  • margin-left

The logical mappings for these (assuming horizontal-tb) are:

  • margin-block-start
  • margin-inline-end
  • margin-block-end
  • margin-inline-start

We can have two value shorthands. For example, to set both margin-block-start and margin-block-end as a shorthand, we can use margin-block: 20px 1em. The first value representing the start edge in the block dimension, the second value the end edge in the block dimension.
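As a small illustration (the class name is made up), the shorthand expands as follows:

/* Illustrative: margin-block sets the block-start edge, then the block-end edge. */
.box {
  margin-block: 20px 1em; /* margin-block-start: 20px; margin-block-end: 1em; */
}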

We hit a problem, however, when we come to the four-value shorthand margin. That property name is used for physical margins — how do we denote the logical four-value version? Various things have been suggested, including a switch at the top of the file:

@mode "logical";

Or, to use a block that looks a little like a media query:

@mode (flow-mode: relative) {

}

Then various suggestions for keyword modifiers, using some punctuation character, or creating a brand new property name:

margin: relative 1em 2em 3em 4em;
margin: 1em 2em 3em 4em !relative;
margin-relative: 1em 2em 3em 4em;
~margin: 1em 2em 3em 4em;

You can read the issue to see the various things that are being considered. Issues discussed were that while the logical version may well end up being generally the default, sometimes you will want things to relate to the screen geometry; we need to be able to have both options in one stylesheet. Having a @mode setting at the top of the CSS could be confusing; it would fail if someone were to copy and paste a chunk of the stylesheet.

My preference is to have some sort of keyword value. That way, if you look at the rule, you can see exactly which mode is being used, even if it does seem slightly inelegant. It is the sort of thing that a preprocessor could deal with for you if you did indeed want all of your properties and values to use the logical versions.

We didn’t manage to come to a resolution on the issue, so if you do have thoughts on which of these might be best, or can see problems with them that we haven’t described, please comment on the issue on GitHub.

Web Platform Tests Discussion

At the CSS Working Group meeting, and then during the unconference-style Technical Plenary Day, I was involved in discussing how to get more people involved in writing tests for CSS specifications. The Web Platform Tests project aims to provide tests for all of the web platform. These tests then help browser vendors check whether their browser conforms to the spec. In the CSS Working Group, the aim is that any normative change to a specification which has reached Candidate Recommendation (CR) status should be accompanied by a test. This makes sense as, once a spec is in CR, we are asking browsers to implement that spec and provide feedback. They need to know if anything in the spec changes so they can update their code.

The problem is that we have very few people writing specs, so for spec writers to have to write all the tests will slow the progress of CSS down. We would love to see other people writing tests, as it is a way to contribute to the web platform and to gain deep knowledge of how specifications work. So we met to think about how we could encourage people to participate in the effort. I’ve written on this subject in the past; if the idea of writing tests for the platform interests you, take a look at my 24 Ways article on “Testing the Web Platform”.

On With The Work!

TPAC has added to my personal to-do list considerably. However, I’ve been able to pick up tips about specification editing and test writing, and to come up with a plan to get the Multi-Column Layout specification — of which I’m the co-editor — back to CR status. As someone who is not a fan of meetings, I’ve come to see how valuable these face-to-face meetings are for the web platform, giving those of us contributing to it a chance to share the knowledge we individually are developing. I feel it is important, though, to then take that knowledge and share it outside of the group in order to help more people get involved with developing as well as using the platform.

If you are interested in how the CSS Working Group functions, and how new CSS is invented and ends up in browsers, check out my 2017 CSSConf.eu presentation “Where Does CSS Come From?” and the information from fantasai in her posts “An Inside View of the CSS Working Group at W3C”.

Smashing Editorial
(il)

Source: Smashing Magazine, The CSS Working Group At TPAC: What’s New In CSS?

Headless WordPress: The Ups And Downs Of Creating A Decoupled WordPress

dreamt up by webguru in Uncategorized | Comments Off on Headless WordPress: The Ups And Downs Of Creating A Decoupled WordPress

Headless WordPress: The Ups And Downs Of Creating A Decoupled WordPress


Denis Žoljom



WordPress has come a long way from its start as a simple blog-writing tool. Fifteen years later, it has become the number one CMS choice for developers and non-developers alike. WordPress now powers roughly 30% of the top 10 million sites on the web.

Ever since the REST API was bundled into WordPress core, developers have been able to experiment with it and use it in a decoupled way, i.e. writing the front-end part using JavaScript frameworks or libraries. At Infinum, we were (and still are) using WordPress in a ‘classic’ way: PHP for the frontend as well as the backend. After a while, we wanted to give the decoupled approach a go. In this article, I’ll share an overview of what it was that we wanted to achieve and what we encountered while trying to implement our goals.

There are several types of projects that can benefit from this approach. For example, simple presentational sites or sites that use WordPress as a backend are the main candidates for the decoupled approach.

In recent years, the industry has thankfully started paying more attention to performance. However, being an easy-to-use, inclusive, and versatile piece of software, WordPress comes with a plethora of options that are not necessarily utilized in each and every project. As a result, website performance can suffer.

Recommended reading: How To Use Heatmaps To Track Clicks On Your WordPress Website

If long website response times keep you up at night, this is a how-to for you. I will cover the basics of creating a decoupled WordPress and some lessons learned, including:

  1. The meaning of a “decoupled WordPress”
  2. Working with the default WordPress REST API
  3. Improving performance with the decoupled JSON approach
  4. Security concerns

So, What Exactly Is A Decoupled WordPress?

When it comes down to how WordPress is programmed, one thing is certain: it doesn’t follow the Model-View-Controller (MVC) design pattern that many developers are familiar with. Because of its history as a sort of fork of an old blogging platform called “b2” (more details here), it’s largely written in a procedural way (using function-based code). WordPress core developers used a system of hooks which allowed other developers to modify or extend certain functionalities.

It’s an all-in-one system that is equipped with a working admin interface; it manages database connection, and has a bunch of useful APIs exposed that handle user authentication, routing, and more.

But thanks to the REST API, you can separate the WordPress backend as a sort of model and controller bundled together that handle data manipulation and database interaction, and use REST API Controller to interact with a separate view layer using various API endpoints. In addition to MVC separation, we can (for security reasons or speed improvements) place the JS App on a separate server like in the schema below:


Image depicting decoupled WordPress diagram with PHP and JS part separated
Decoupled WordPress diagram. (Large preview)

Advantages Of Using The Decoupled Approach

One reason why you may want to use this approach is to ensure a separation of concerns. The frontend and the backend interact via endpoints; each can be on its own separate server which can be optimized specifically for its respective task, i.e. separately running a PHP app and running a Node.js app.

By separating your frontend from the backend, it’s easier to redesign it in the future, without changing the CMS. Also, front-end developers only need to care about what to do with the data the backend provides them. This lets them get creative and use modern libraries like ReactJS, Vue or Angular to deliver highly dynamic web apps. For example, it’s easier to build a progressive web app when using the aforementioned libraries.

Another advantage is reflected in the website security. Exploiting the website through the backend becomes more difficult since it’s largely hidden from the public.

Recommended reading: WordPress Security As A Process

Shortcomings Of Using The Decoupled Approach

First, having a decoupled WordPress means maintaining two separate instances:

  1. WordPress for the backend;
  2. A separate front-end app, including timely security updates.

Second, some of the front-end libraries do have a steeper learning curve. It will either take a lot of time to learn a new language (if you are only accustomed to HTML and CSS for templating), or will require bringing additional JavaScript experts to the project.

Third, by separating the frontend, you are losing the power of the WYSIWYG editor, and the ‘Live Preview’ button in WordPress doesn’t work either.

Working With WordPress REST API

Before we delve deeper in the code, a couple more things about WordPress REST API. The full power of the REST API in WordPress came with version 4.7 on December 6th, 2016.

What WordPress REST API allows you to do is to interact with your WordPress installation remotely by sending and receiving JSON objects.

Setting Up A Project

Since it comes bundled with latest WordPress installation, we will be working on the Twenty Seventeen theme. I’m working on Varying Vagrant Vagrants, and have set up a test site with an URL http://dev.wordpress.test/. This URL will be used throughout the article. We’ll also import posts from the wordpress.org Theme Review Teams repository so that we have some test data to work with. But first, we will get familiar working with default endpoints, and then we’ll create our own custom endpoint.

Access The Default REST Endpoint

As already mentioned, WordPress comes with several built-in endpoints that you can examine by going to the /wp-json/ route:

http://dev.wordpress.test/wp-json/

Either by putting this URL directly in your browser, or adding it in the postman app, you’ll get out a JSON response from WordPress REST API that looks something like this:

{
    "name": "Test dev site",
    "description": "Just another WordPress site",
    "url": "http://dev.wordpress.test",
    "home": "http://dev.wordpress.test",
    "gmt_offset": "0",
    "timezone_string": "",
    "namespaces": [
        "oembed/1.0",
        "wp/v2"
    ],
    "authentication": [],
    "routes": {
        "/": {
            "namespace": "",
            "methods": [
                "GET"
            ],
            "endpoints": [
                {
                    "methods": [
                        "GET"
                    ],
                    "args": {
                        "context": {
                            "required": false,
                            "default": "view"
                        }
                    }
                }
            ],
            "_links": {
                "self": "http://dev.wordpress.test/wp-json/"
            }
        },
        "/oembed/1.0": {
            "namespace": "oembed/1.0",
            "methods": [
                "GET"
            ],
            "endpoints": [
                {
                    "methods": [
                        "GET"
                    ],
                    "args": {
                        "namespace": {
                            "required": false,
                            "default": "oembed/1.0"
                        },
                        "context": {
                            "required": false,
                            "default": "view"
                        }
                    }
                }
            ],
            "_links": {
                "self": "http://dev.wordpress.test/wp-json/oembed/1.0"
            }
        },
        ...
        "wp/v2": {
        ...

So in order to get all of the posts in our site by using REST, we would need to go to http://dev.wordpress.test/wp-json/wp/v2/posts. Notice that the wp/v2/ marks the reserved core endpoints like posts, pages, media, taxonomies, categories, and so on.
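On the front end, a decoupled app could consume that endpoint with a plain fetch() call. A minimal sketch (the URL is the test site used throughout this article):

// A minimal sketch: fetch the posts from the default endpoint and log their titles.
fetch('http://dev.wordpress.test/wp-json/wp/v2/posts')
  .then((response) => response.json())
  .then((posts) => {
    posts.forEach((post) => console.log(post.title.rendered));
  });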

So, how do we add a custom endpoint?

Create A Custom REST Endpoint

Let’s say we want to add a new endpoint or an additional field to an existing endpoint. There are several ways we can do that. The first can be done automatically when creating a custom post type. For instance, say we want to create a documentation endpoint. Let’s create a small test plugin. Create a test-documentation folder in the wp-content/plugins folder, and add a documentation.php file that looks like this:

<?php
/**
 * Test plugin
 *
 * @since             1.0.0
 * @package           test_plugin
 *
 * @wordpress-plugin
 * Plugin Name:       Test Documentation Plugin
 * Plugin URI:
 * Description:       The test plugin that adds rest functionality
 * Version:           1.0.0
 * Author:            Infinum 
 * Author URI:        https://infinum.co/
 * License:           GPL-2.0+
 * License URI:       http://www.gnu.org/licenses/gpl-2.0.txt
 * Text Domain:       test-plugin
 */

namespace Test_Plugin;

// If this file is called directly, abort.
if ( ! defined( 'WPINC' ) ) {
  die;
}

/**
 * Class that holds all the necessary functionality for the
 * documentation custom post type
 *
 * @since  1.0.0
 */
class Documentation {
  /**
   * The custom post type slug
   *
   * @var string
   *
   * @since 1.0.0
   */
  const PLUGIN_NAME = 'documentation-plugin';

  /**
   * The custom post type slug
   *
   * @var string
   *
   * @since 1.0.0
   */
  const POST_TYPE_SLUG = 'documentation';

  /**
   * The custom taxonomy type slug
   *
   * @var string
   *
   * @since 1.0.0
   */
  const TAXONOMY_SLUG = 'documentation-category';

  /**
   * Register custom post type
   *
   * @since 1.0.0
   */
  public function register_post_type() {
    $args = array(
        'label'              => esc_html__( 'Documentation', 'test-plugin' ),
        'public'             => true,
        'menu_position'      => 47,
        'menu_icon'          => 'dashicons-book',
        'supports'           => array( 'title', 'editor', 'revisions', 'thumbnail' ),
        'has_archive'        => false,
        'show_in_rest'       => true,
        'publicly_queryable' => false,
    );

    register_post_type( self::POST_TYPE_SLUG, $args );
  }

  /**
   * Register custom tag taxonomy
   *
   * @since 1.0.0
   */
  public function register_taxonomy() {
    $args = array(
        'hierarchical'          => false,
        'label'                 => esc_html__( 'Documentation tags', 'test-plugin' ),
        'show_ui'               => true,
        'show_admin_column'     => true,
        'update_count_callback' => '_update_post_term_count',
        'show_in_rest'          => true,
        'query_var'             => true,
    );

    register_taxonomy( self::TAXONOMY_SLUG, [ self::POST_TYPE_SLUG ], $args );
  }
}

$documentation = new Documentation();

add_action( 'init', [ $documentation, 'register_post_type' ] );
add_action( 'init', [ $documentation, 'register_taxonomy' ] );

By registering the new post type and taxonomy, and setting the show_in_rest argument to true, WordPress automatically created a REST route in the /wp/v2/ namespace. You now have http://dev.wordpress.test/wp-json/wp/v2/documentation and http://dev.wordpress.test/wp-json/wp/v2/documentation-category endpoints available. If we add a post in our newly created documentation custom post type, going to http://dev.wordpress.test/wp-json/wp/v2/documentation will give us a response that looks like this:

[
    {
        "id": 4,
        "date": "2018-06-11T19:48:51",
        "date_gmt": "2018-06-11T19:48:51",
        "guid": {
            "rendered": "http://dev.wordpress.test/?post_type=documentation&p=4"
        },
        "modified": "2018-06-11T19:48:51",
        "modified_gmt": "2018-06-11T19:48:51",
        "slug": "test-documentation",
        "status": "publish",
        "type": "documentation",
        "link": "http://dev.wordpress.test/documentation/test-documentation/",
        "title": {
            "rendered": "Test documentation"
        },
        "content": {
            "rendered": "<p>This is some documentation content</p>\n",
            "protected": false
        },
        "featured_media": 0,
        "template": "",
        "documentation-category": [
            2
        ],
        "_links": {
            "self": [
                {
                    "href": "http://dev.wordpress.test/wp-json/wp/v2/documentation/4"
                }
            ],
            "collection": [
                {
                    "href": "http://dev.wordpress.test/wp-json/wp/v2/documentation"
                }
            ],
            "about": [
                {
                    "href": "http://dev.wordpress.test/wp-json/wp/v2/types/documentation"
                }
            ],
            "version-history": [
                {
                    "href": "http://dev.wordpress.test/wp-json/wp/v2/documentation/4/revisions"
                }
            ],
            "wp:attachment": [
                {
                    "href": "http://dev.wordpress.test/wp-json/wp/v2/media?parent=4"
                }
            ],
            "wp:term": [
                {
                    "taxonomy": "documentation-category",
                    "embeddable": true,
                    "href": "http://dev.wordpress.test/wp-json/wp/v2/documentation-category?post=4"
                }
            ],
            "curies": [
                {
                    "name": "wp",
                    "href": "https://api.w.org/{rel}",
                    "templated": true
                }
            ]
        }
    }
]

This is a great starting point for our single-page application. Another way we can add a custom endpoint is by hooking to the rest_api_init hook and creating an endpoint ourselves. Let’s add a custom-documentation route that is a bit different than the one we registered. Still working in the same plugin, we can add:

/**
 * Create a custom endpoint
 *
 * @since 1.0.0
 */
public function create_custom_documentation_endpoint() {
  register_rest_route(
    self::PLUGIN_NAME . '/v1', '/custom-documentation',
    array(
        'methods'  => 'GET',
        'callback' => [ $this, 'get_custom_documentation' ],
    )
  );
}

/**
 * Create a callback for the custom documentation endpoint
 *
 * @return string                   JSON that indicates success/failure of the update,
 *                                  or JSON that indicates an error occurred.
 * @since 1.0.0
 */
public function get_custom_documentation() {
  /* Some permission checks can be added here. */

  // Return only documentation name and tag name.
  $doc_args = array(
      'post_type'   => self::POST_TYPE_SLUG,
      'post_status' => 'publish',
      'perm'        => 'readable'
  );

  // Fully qualified class name, since this plugin file is namespaced.
  $query = new \WP_Query( $doc_args );

  $response = [];
  $counter  = 0;

  // The Loop
  if ( $query->have_posts() ) {
    while ( $query->have_posts() ) {
      $query->the_post();

      $post_id   = get_the_ID();
      $post_tags = get_the_terms( $post_id, self::TAXONOMY_SLUG );

      $response[ $counter ]['title'] = get_the_title();

      foreach ( $post_tags as $tags_key => $tags_value ) {
        $response[ $counter ]['tags'][] = $tags_value->name;
      }
      $counter++;
    }
  } else {
    $response = esc_html__( 'There are no posts.', 'test-plugin' );
  }
  /* Restore original Post Data */
  wp_reset_postdata();

  return rest_ensure_response( $response );
}

And hook the create_custom_documentation_endpoint() method to the rest_api_init hook, like so:

add_action( 'rest_api_init', [ $documentation, 'create_custom_documentation_endpoint' ] );

This will add a custom route at http://dev.wordpress.test/wp-json/documentation-plugin/v1/custom-documentation, with the callback returning the response for that route.

[{
  "title": "Another test documentation",
  "tags": ["Another tag"]
}, {
  "title": "Test documentation",
  "tags": ["REST API", "test tag"]
}]

There are a lot of other things you can do with REST API (you can find more details in the REST API handbook).

Work Around Long Response Times When Using The Default REST API

For anyone who has tried to build a decoupled WordPress site, this is not a new thing — REST API is slow.

My team and I first encountered the strangely lagging WordPress REST API on a client site (not decoupled), where we used custom endpoints to get the list of locations on a Google map, alongside other meta information created using the Advanced Custom Fields Pro plugin. It turned out that the time to first byte (TTFB) — which is used as an indication of the responsiveness of a web server or other network resource — took more than 3 seconds.

After a bit of investigating, we realized the default REST API calls were actually really slow, especially when we “burdened” the site with additional plugins. So, we did a small test. We installed a couple of popular plugins and encountered some interesting results. The Postman app gave a load time of 1.97 s for 41.9 KB of response size. Chrome’s load time was 1.25 s (TTFB was 1.25 s, content was downloaded in 3.96 ms). And that was just to retrieve a simple list of posts: no taxonomy, no user data, no additional meta fields.

Why did this happen?

It turns out that accessing the REST API on a default WordPress installation will load the entire WordPress core to serve the endpoints, even though most of it is not used. Also, the more plugins you add, the worse things get. The default REST controller WP_REST_Controller is a really big class that does a lot more than necessary when building a simple web page. It handles route registering, permission checks, creating and deleting items, and so on.

There are two common workarounds for this issue:

  1. Intercept the loading of the plugins, and prevent loading them all when you need to serve a simple REST response;
  2. Load only the bare minimum of WordPress and store the data in a transient, from which we then fetch the data using a custom page.

Improving Performance With The Decoupled JSON Approach

When you are working with simple presentation sites, you don’t need all the functionality that the REST API offers. Of course, this is where good planning is crucial. You really don’t want to build your site without the REST API, and then say in a year’s time that you’d like to connect to your site, or maybe create a mobile app that needs to use REST API functionality. Do you?

For that reason, we utilized two WordPress features that can help you out when serving simple JSON data:

  • Transients API for caching,
  • Loading the minimum necessary WordPress using the SHORTINIT constant.

Creating A Simple Decoupled Pages Endpoint

Let’s create a small plugin that will demonstrate the effect that we’re talking about. First, add a wp-config-simple.php file in your json-transient plugin that looks like this:

<?php
/**
 * Create simple wp configuration for the routes
 *
 * @since 1.0.0
 * @package json-transient
 */

define( 'SHORTINIT', true );
$parse_uri = explode( 'wp-content', $_SERVER['SCRIPT_FILENAME'] );
require_once filter_var( $parse_uri[0] . 'wp-load.php', FILTER_SANITIZE_STRING );

The define( 'SHORTINIT', true ); call will prevent the majority of WordPress core files from being loaded, as can be seen in the wp-settings.php file.

We still may need some of the WordPress functionality, so we can require the file (like wp-load.php) manually. Since wp-load.php sits in the root of our WordPress installation, we will fetch it by getting the path of our file using $_SERVER['SCRIPT_FILENAME'], and then exploding that string by wp-content string. This will return an array with two values:

  1. The root of our installation;
  2. The rest of the file path (which is of no interest to us).

Keep in mind that we’re using the default installation of WordPress, and not a modified one, like for example in the Bedrock boilerplate, which splits the WordPress in a different file organization.

Lastly, we require the wp-load.php file, with a little bit of sanitization, for security.

In our init.php file, we’ll add the following:

<?php
/**
 * Test plugin
 *
 * @since             1.0.0
 * @package           json-transient
 *
 * @wordpress-plugin
 * Plugin Name:       Json Transient
 * Plugin URI:
 * Description:       Proof of concept for caching api like calls
 * Version:           1.0.0
 * Author:            Infinum 
 * Author URI:        https://infinum.co/
 * License:           GPL-2.0+
 * License URI:       http://www.gnu.org/licenses/gpl-2.0.txt
 * Text Domain:       json-transient
 */

namespace Json_Transient;

// If this file is called directly, abort.
if ( ! defined( 'WPINC' ) ) {
  die;
}

class Init {
  /**
   * Get the array of allowed types to do operations on.
   *
   * @return array
   *
   * @since 1.0.0
   */
  public function get_allowed_post_types() {
    return array( 'post', 'page' );
  }

  /**
   * Check if post type is allowed to be saved in a transient.
   *
   * @param string $post_type Get post type.
   * @return boolean
   *
   * @since 1.0.0
   */
  public function is_post_type_allowed_to_save( $post_type = null ) {
    if( ! $post_type ) {
      return false;
    }

    $allowed_types = $this->get_allowed_post_types();

    if ( in_array( $post_type, $allowed_types, true ) ) {
      return true;
    }

    return false;
  }

  /**
   * Get Page cache name for transient by post slug and type.
   *
   * @param string $post_slug Page Slug to save.
   * @param string $post_type Page Type to save.
   * @return string
   *
   * @since  1.0.0
   */
  public function get_page_cache_name_by_slug( $post_slug = null, $post_type = null ) {
    if( ! $post_slug || ! $post_type ) {
      return false;
    }

    $post_slug = str_replace( '__trashed', '', $post_slug );

    return 'jt_data_' . $post_type . '_' . $post_slug;
  }

    /**
   * Get full post data by post slug and type.
   *
   * @param string $post_slug Page Slug to do Query by.
   * @param string $post_type Page Type to do Query by.
   * @return array
   *
   * @since  1.0.0
   */
  public function get_page_data_by_slug( $post_slug = null, $post_type = null ) {
    if( ! $post_slug || ! $post_type ) {
      return false;
    }

    $page_output = '';

    $args = array(
      'name'           => $post_slug,
      'post_type'      => $post_type,
      'posts_per_page' => 1,
      'no_found_rows'  => true
    );

    // Fully qualified class name, since this plugin file is namespaced.
    $the_query = new \WP_Query( $args );

    if ( $the_query->have_posts() ) {
      while ( $the_query->have_posts() ) {
        $the_query->the_post();
        $page_output = $the_query->post;
      }
      wp_reset_postdata();
    }
    return $page_output;
  }

  /**
   * Return Page in JSON format
   *
   * @param string $post_slug Page Slug.
   * @param string $post_type Page Type.
   * @return json
   *
   * @since  1.0.0
   */
  public function get_json_page( $post_slug = null, $post_type = null ) {
    if( ! $post_slug || ! $post_type ) {
      return false;
    }

    return wp_json_encode( $this->get_page_data_by_slug( $post_slug, $post_type ) );
  }

  /**
   * Update Page to transient for caching on action hooks save_post.
   *
   * @param int $post_id Saved Post ID provided by action hook.
   *
   * @since 1.0.0
   */
  public function update_page_transient( $post_id ) {

    $post_status = get_post_status( $post_id );
    $post        = get_post( $post_id );
    $post_slug   = $post->post_name;
    $post_type   = $post->post_type;
    $cache_name  = $this->get_page_cache_name_by_slug( $post_slug, $post_type );

    if( ! $cache_name ) {
      return false;
    }

    if( $post_status === 'auto-draft' || $post_status === 'inherit' ) {
      return false;
    } else if( $post_status === 'trash' ) {
      delete_transient( $cache_name );
    } else  {
      if( $this->is_post_type_allowed_to_save( $post_type ) ) {
        $cache = $this->get_json_page( $post_slug, $post_type );
        set_transient( $cache_name, $cache, 0 );
      }
    }
  }
}

$init = new Init();

add_action( 'save_post', [ $init, 'update_page_transient' ] );

The helper methods in the above code will enable us to do some caching:

  • get_allowed_post_types()
    This method defines the post types that we want to expose through our custom ‘endpoint’. You can extend this; in the plugin we’ve built, this method is filterable so that you can simply use a filter to add additional items.
  • is_post_type_allowed_to_save()
    This method simply checks to see if the post type we’re trying to fetch the data from is in the allowed array specified by the previous method.
  • get_page_cache_name_by_slug()
    This method will return the name of the transient that the data will be fetched from.
  • get_page_data_by_slug()
    This method performs the WP_Query on the post via its slug and post type, and returns the contents of the post object that we’ll convert to JSON using the get_json_page() method.
  • update_page_transient()
    This will be run on the save_post hook and will overwrite the transient in the database with the JSON data of our post. This last method is known as the “key caching method”.

Let’s explain transients in more depth.

Transients API

Transients API is used to store data in the options table of your WordPress database for a specific period of time. It’s a persisted object cache, meaning that you are storing some object, for example, results of big and slow queries or full pages that can be persisted across page loads. It is similar to regular WordPress Object Cache, but unlike WP_Cache, transients will persist data across page loads, where WP_Cache (storing the data in memory) will only hold the data for the duration of a request.

It’s a key-value store, meaning that we can easily and quickly fetch the desired data, similar to what in-memory caching systems like Memcached or Redis do. The difference is that you’d usually need to install those separately on the server (which can be an issue on shared servers), whereas transients are built in with WordPress.

As noted on its Codex page, transients are inherently sped up by caching plugins, since those can store transients in memory instead of in the database. The general rule is that you shouldn’t assume that a transient is always present in the database, which is why it’s good practice to check for its existence before fetching it:

$transient_name = get_transient( 'transient_name' );
if ( $transient_name === false ) {
  set_transient( 'transient_name', $transient_data, $transient_expiry );
}

You can use transients without an expiration (like we are doing), and that’s why we implemented a sort of ‘cache-busting’ on post save. In addition to all the great functionality they provide, transients can hold up to 4 GB of data, but we don’t recommend storing anything that big in a single database field.

Recommended reading: Be Watchful: PHP And WordPress Functions That Can Make Your Site Insecure

Final Endpoint: Testing And Verification

The last piece of the puzzle that we need is an ‘endpoint’. I’m using the term endpoint here, even though it’s not an endpoint since we are directly calling a specific file to fetch our results. So we can create a test.php file that looks like this:

<?php
// Load the bare minimum of WordPress (via SHORTINIT) and the plugin class.
require_once 'wp-config-simple.php';
require_once 'init.php';

$init      = new Json_Transient\Init();
$post_slug = isset( $_GET['slug'] ) ? $_GET['slug'] : '';
$post_type = isset( $_GET['type'] ) ? $_GET['type'] : '';

// Return error on missing query parameters.
if ( ! $post_slug || ! $post_type ) {
  wp_send_json( 'Error, page slug or type is missing!' );
}

// Fetch the cached JSON from the transient.
$cache = get_transient( $init->get_page_cache_name_by_slug( $post_slug, $post_type ) );

// Return error on false.
if ( $cache === false ) {
  wp_send_json( 'Error, the page does not exist or it is not cached correctly. Please try rebuilding cache and try again!' );
}

// Decode json for output.
wp_send_json( json_decode( $cache ) );

If we go to http://dev.wordpress.test/wp-content/plugins/json-transient/test.php, we’ll see this message:

Error, page slug or type is missing!

So, we’ll need to specify the post type and post slug. When we now go to http://dev.wordpress.test/wp-content/plugins/json-transient/test.php?slug=hello-world&type=post we’ll see:

Error, the page does not exist or it is not cached correctly. Please try rebuilding cache and try again!

Oh, wait! We need to re-save our pages and posts first. So when you’re starting out, this can be easy. But if you already have 100+ pages or posts, this can be a challenging task. This is why we implemented a way to clear the transients in the Decoupled JSON Content plugin, and rebuild them in a batch.

But go ahead and re-save the Hello World post and then open the link again. What you should now have is something that looks like this:

{
  "ID": 1,
  "post_author": "1",
  "post_date": "2018-06-26 18:28:57",
  "post_date_gmt": "2018-06-26 18:28:57",
  "post_content": "Welcome to WordPress. This is your first post. Edit or delete it, then start writing!",
  "post_title": "Hello world!",
  "post_excerpt": "",
  "post_status": "publish",
  "comment_status": "open",
  "ping_status": "open",
  "post_password": "",
  "post_name": "hello-world",
  "to_ping": "",
  "pinged": "",
  "post_modified": "2018-06-30 08:34:52",
  "post_modified_gmt": "2018-06-30 08:34:52",
  "post_content_filtered": "",
  "post_parent": 0,
  "guid": "http://dev.wordpress.test/?p=1",
  "menu_order": 0,
  "post_type": "post",
  "post_mime_type": "",
  "comment_count": "1",
  "filter": "raw"
}

And that’s it. The plugin we made has some extra functionality that you can use, but, in a nutshell, this is how you can fetch JSON data from your WordPress installation in a way that is much faster than using the REST API.

Before And After: Improved Response Time

We conducted testing in Chrome, where we could see the total response time and the TTFB separately. We tested response times ten times in a row: first without plugins and then with the plugins added. Also, we tested the response for a list of posts and for a single post.

The results of the test are illustrated in the tables below:


Comparison graph depicting response times of using WordPress REST API vs using the decoupled approach without added plugins. The decoupled approach is 2 to 3 times faster
Comparison graph depicting response times of using WordPress REST API vs using the decoupled approach without added plugins. The decoupled approach is 2 to 3 times faster. (Large preview)

Comparison graph depicting response times of using WordPress REST API vs using the decoupled approach with added plugins. The decoupled approach is up to 8 times faster.
Comparison graph depicting response times of using WordPress REST API vs using the decoupled approach with added plugins. The decoupled approach is up to 8 times faster. (Large preview)

As you can see, the difference is drastic.

Security Concerns

There are some caveats that you’ll need to take a good look at. First of all, we are manually loading WordPress core files, which in the WordPress world is a big no-no. Why? Well, besides the fact that manually fetching core files can be tricky (especially if you’re using nonstandard installations such as Bedrock), it could pose some security concerns.

If you decide to use the method described in this article, be sure you know how to fortify your server security.

First, add HTTP headers like in the test.php file:

header( 'Access-Control-Allow-Origin: your-front-end-app.url' );

header( 'Content-Type: application/json' );

The first header uses CORS so that, in the browser, only your front-end app (served from the specified origin) is allowed to read the contents of the specified file.

Second, disable directory listing for your app. You can do this by modifying your Nginx settings, or by adding Options -Indexes to your .htaccess file if you’re on an Apache server.

Adding a token check to the response is also a good measure that can prevent unwanted access. We are actually working on a way to modify our Decoupled JSON plugin so that we can include these security measures by default.

A check for an Authorization header sent by the frontend app could look like this:

if ( ! isset( $_SERVER['HTTP_AUTHORIZATION'] ) ) {
  return;
}

$auth_header = $_SERVER['HTTP_AUTHORIZATION'];

Then you can check if the specific token (a secret that is only shared by the front- and back-end apps) is provided and correct.
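A minimal sketch of such a check could look like this (MY_SHARED_SECRET_TOKEN is a hypothetical constant holding that shared secret):

// Sketch only: MY_SHARED_SECRET_TOKEN is a hypothetical constant with the shared secret.
if ( ! hash_equals( 'Bearer ' . MY_SHARED_SECRET_TOKEN, $auth_header ) ) {
  wp_send_json( 'Error, token is not valid!' );
}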

Conclusion

REST API is great because it can be used to create fully-fledged apps — creating, retrieving, updating and deleting the data. The downside of using it is its speed.

Obviously, creating an app is different than creating a classic website. You probably won’t need all the plugins we installed. But if you just need the data for presentational purposes, caching data and serving it in a custom file seems like the perfect solution at the moment, when working with decoupled sites.

You may be thinking that creating a custom plugin to speed up the website response time is overkill, but we live in a world in which every second counts. Everyone knows that if a website is slow, users will abandon it. There are many studies that demonstrate the connection between website performance and conversion rates. And if you still need convincing, Google penalizes slow websites.

The method explained in this article solves the speed issue that the WordPress REST API encounters and will give you an extra boost when working on a decoupled WordPress project. As we are on our never-ending quest to squeeze that last millisecond out of every request and response, we plan to optimize the plugin even more. In the meantime, please share your ideas on speeding up decoupled WordPress!

Smashing Editorial
(md, ra, yk, il)

Source: Smashing Magazine, Headless WordPress: The Ups And Downs Of Creating A Decoupled WordPress

Collective #462

dreamt up by webguru in Uncategorized | Comments Off on Collective #462

Gradient Shapes

Shapes generated with CSS background gradients by Yuan Chuan (includes conic gradients, which are currently Chrome only).

Check it out







Hexagon

A beautifully animated hexagon wave made by Misha Tsankashvili with CSS only.

Check it out


Collective #462 was written by Pedro Botelho and published on Codrops.


Source: Codrops, Collective #462

Video Playback On The Web: Video Delivery Best Practices (Part 2)

dreamt up by webguru in Uncategorized | Comments Off on Video Playback On The Web: Video Delivery Best Practices (Part 2)

Video Playback On The Web: Video Delivery Best Practices (Part 2)


Doug Sillars



In my previous post, I examined video trends on the web today, using data from the HTTP Archive. I found that many websites serve the same video content on mobile and desktop, and that many video streams are being delivered at bitrates that are too high to play back on 3G-speed connections. We also discovered that many websites automatically download video to mobile devices, damaging customers’ data plans and battery life for videos that might never be played.

TL;DR: In this post, we look at techniques to optimize the speed and delivery of video to your customers, and provide a list of 9 best practices to help you deliver your video assets.

Video Playback Metrics

There are 3 principal video playback metrics in use today:

  1. Video Startup Time
  2. Video Stalling
  3. Video Quality

Since video files are large, optimizing the video to be as small as possible will lead to faster video delivery, speeding up video start, lowering the number of stalls, and minimizing the effect on the quality of the video delivered. Of course, we need to balance startup speed and stalling with the third metric of quality (and higher-quality videos generally use more data).

Video Startup

When a user presses play on a video, they expect to be able to watch the video quickly. According to Conviva (a leader in video metric analysis), in Q1 of 2018, 14% of videos (that’s 2.4 billion video plays) never started playing after the user pressed play.


Pie chart showing that nearly 15% of all videos fail to play
Video Start Breakdown (Large preview)

2.3% of videos (400M video requests) failed to play after the user pressed the play button. 11.54% (2B plays) were abandoned by the user after pressing play. Let’s try to break down what might be causing these issues.

Video Playback Failure

Video playback failure accounted for 2.3% of all video plays. What could lead to this? In the HTTP Archive data, we see 0.3% of all video requests resulting in a 4xx or 5xx HTTP response, so some percentage fail due to bad URLs or server misconfigurations. Another potential issue (that is not observed in the HTTP Archive data) is videos that are blocked by geolocation (blocked based on the location of the viewer and the licensing of the provider to display the video in that locale).

Video Playback Abandonment

The Conviva report states that 11.5% of all video plays were abandoned by the customer before the video started playing. The issue here is that the video is not being delivered to the customer fast enough, and they give up. There are many studies on the mobile web where long delays cause abandonment of web pages, and it appears that the same effect occurs with video playback as well.

Research from Akamai shows that viewers will wait for 2 seconds, but for each subsequent second, 5.8% of viewers abandon the video.


Chart displaying the abandonment rate as startup time is longer.
Rate of abandonment over time (Large preview)

So what leads to video playback issues? In general, larger files take longer to download, so will delay playback. Let’s look at a few ways that one can speed up the playback of videos. To reduce the number of videos abandoned at startup, we should ‘slim’ down these files as best as possible, so they download (and begin playback) quickly.

MP4: Video Preload

To ensure fast playback on the web, one option is to preload the video onto the device in advance. That way, when your customer clicks ‘play’ the video is already downloaded, and playback will be fast. HTML offers a preload attribute with 3 possible options: auto, metadata and none.
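As a quick illustration (the file names are placeholders):

<!-- Illustrative markup: the preload attribute accepts auto, metadata or none. -->
<video src="intro.mp4" poster="intro.jpg" preload="metadata" controls></video>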

preload = auto

When your video is delivered with preload="auto", the browser downloads the entire video file and stores it locally. This permits a large performance improvement for video startup, since the video is available locally on the device, and no network interference will slow the startup.

However, preload="auto" should only be used if there is a high probability that the video will be viewed. If the video is simply resident on your webpage, and it is downloaded each time, this will add a large data penalty to your mobile users, as well as increase your server/CDN costs for delivering the entire video to all of your users.

This website has a section entitled “Video Gallery” with several videos. Each video in this section has preload set to auto, and we can visualize their download in the WebPageTest waterfall as green horizontal lines:


A WebPageTest Waterfall chart
Waterfall of video preload (Large preview)

There is a section called “Video Gallery”, and the files for this small section of the website account for 14.6 MB (83%) of the page download. The odds that one (of many) videos will be played are probably pretty low, and so utilizing preload="auto" only generates a lot of data traffic for the site.


Pie Chart showing the high percentage (83%) of video usage.
Webpage data breakdown (Large preview)

In this case, it is unlikely that even one of these videos will be viewed, yet all of them are downloaded completely, adding 14.8 MB of content to the mobile site (83% of the content on the page). For videos that have a high probability of playback (perhaps >90% of page views result in video play), preloading the entire video is a very good idea. But for videos that are unlikely to be played, preload="auto" will only cause extra tonnage of content through your servers and to your customers’ mobile (and desktop) devices.

preload="metadata"

When the preload="metadata" attribute is used, an initial segment of the video is downloaded. This allows the player to know the size of the video window, and to perhaps have a second or two of video downloaded for immediate playback. The browser simply makes a 206 (partial content) request for the video content. By storing a small bit of video data on the device, video startup time is decreased without a large impact on the amount of data transferred.

On Chrome, metadata is the default choice if no attribute is chosen.

Note: This can still lead to a large amount of video to be downloaded, if the video is large.

For example, on a mobile website with a video set at preload="metadata", we see just one request for video:


Webpage Test Waterfall chart
(Large preview)

And the request is a partial download, but it still results in 2.7 MB of video being downloaded, because the full video is 1080p, 150 s long and 97 MB (we’ll talk about optimizing video size in the next sections).


Pie chart showing that 2.7 MB or 42% of data is still video with preload=metadata.
KB usage with video metadata (Large preview)

So, I would recommend that preload="metadata" still only be used when there is a fairly high probability that the video will be viewed by your users, or if the video is small.

preload="none"

This is the most economical download option for videos, as no video files are downloaded when the page is loaded. This will potentially add a delay in playback, but will result in a faster initial page load. For sites with many videos on a single page, it may make sense to add a poster to the video window, and not download any of the video until it is expressly requested by the end user. All YouTube videos that are embedded on websites never download any video content until the play button is pressed, essentially behaving as if preload="none".

Preload Best Practice: Only use preload="auto" if there is a high probability that the video will be watched. In general, the use of preload="metadata" provides a good balance in data usage vs. startup time, but should be monitored for excessive data usage.

MP4 Video Playback Tips

Now that the video has started, how can we ensure that playback is optimized so that the video does not stall and continues playing? Again, the trick is to make sure the video is as small as possible.

Let’s look at some tricks to optimize the size of video downloads. There are several dimensions of video that can be optimized to reduce the size of the video:

Audio

Video files are split into different “streams” — the most common being the video stream. The second most common stream is the audio track that syncs to the video. In some video playback applications, the audio stream is delivered separately; this allows for different languages to be delivered in a seamless manner.

If your video is played back in a silent manner (like a looping GIF, or a background video), removing the audio stream from the video is a quick and easy way to reduce the file size. In one example of a background video, the full file was 5.3 MB, but the audio track (which is never heard) was nearly 300 KB (5% of the file). By simply eliminating the audio, the file will be delivered quickly without wasting bytes.

42% of the MP4 files found on the HTTP Archive have no audio stream.

Best Practice: Remove the audio tracks from videos that are played silently.
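One common way to do this (not covered in the article itself) is with FFmpeg; a minimal sketch, with placeholder file names:

# Copy the video stream untouched (-c:v copy) and drop the audio (-an).
ffmpeg -i background.mp4 -c:v copy -an background-muted.mp4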

Video Encoding

When encoding a video, there are options to reduce the video quality (the number of pixels per frame, or the frames per second). Reducing a high-quality video to a size suitable for the web is easy to do, and generally does not noticeably affect the quality delivered to your end users. This article is not long enough for an in-depth discussion of all the various compression techniques available for video. In the x264 and x265 encoders, there is a setting called the Constant Rate Factor (CRF). Using a CRF of 23-28 will generally give a good compression/quality trade-off, and is a great first step into the realm of video compression.
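For example, an x264 encode with a CRF in that range might look like this with FFmpeg (file names and the remaining settings are placeholders, not recommendations from the original article):

# Re-encode the video with x264 at CRF 23; audio is re-encoded to 128 kbps AAC.
ffmpeg -i source.mov -c:v libx264 -crf 23 -preset medium -c:a aac -b:a 128k output.mp4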

Video Size

Video size can be affected by many dimensions: length, width, and height (you could probably include audio here as well).

Video Duration

The length of the video is generally not a feature that a web developer can adjust. If the video is going to playback for three minutes, it is going to playback for three minutes. In cases in which the video is exceptionally long, tools like preload="none" or streaming the video can allow for a smaller amount of data to be downloaded initially to reduce page load time.

Video Dimensions

18% of all video found in the HTTP Archive is identical on mobile and desktop. Those who have worked with responsive web design know how optimizing images for different viewports can drastically reduce load times since the size of the images is much smaller for smaller screens.

The same holds for video. A website with a 30 MB 2560×1226 background video will have a hard time downloading the video on mobile (probably on desktop, too!). Resizing the video drastically decreases the file size, and might even allow for three or four different background videos to be served:

Width (pixels) Video Size (MB)
1226 30
1080 8.1
720 4.3
608 3.3
405 1.76

Now, unfortunately, browsers do not support media queries for video in HTML, meaning that this just does not work:

<video preload="auto" autoplay muted controls
    source sizes="(max-width:1400px 100vw, 1400px"
    srcset="small.mp4 200w,
            medium.mp4 800w,
            large.mp4 1400w"
    src="large.mp4"

</video>

Therefore, we’ll need to create a small JS wrapper to deliver the videos we want to different screen sizes. But before we go there…

Downloading Video, But Hiding It From View

Another throwback to the early responsive web is to download full-size images, but to hide them on mobile devices. Your customers get all the delay for downloading the large images (and hit to mobile data plan, and extra battery drain, etc.), and none of the benefit of actually seeing the image. This occurs quite frequently with video on mobile. So, as we write our script, we can ensure that smaller screens never request the video that will not appear in the first place.

Retina Quality Videos

You may have different videos for different device screen densities. This can lead to added time to download the videos to your mobile customers. You may wish to prevent retina videos on smaller screen devices, or on devices with a limited network bandwidth, falling back to standard quality videos for these devices. Tools like the Network Information API can provide you with the network throughput, and help you decide which video quality you’d like to serve to your customer.
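As a rough sketch of how that decision could be made (the thresholds, file names, and fallbacks below are illustrative assumptions, not values from the article), the Network Information API can be combined with devicePixelRatio:

// Sketch only: effectiveType, downlink and saveData come from the
// Network Information API (Chromium-based browsers); the URLs are placeholders.
var connection = navigator.connection || navigator.mozConnection || navigator.webkitConnection;
var useRetina = (window.devicePixelRatio || 1) > 1;

if (connection) {
  // Fall back to standard quality on slower connections or when Data Saver is on
  if (connection.saveData || connection.effectiveType !== '4g' || connection.downlink < 5) {
    useRetina = false;
  }
}

document.querySelector('video').src = useRetina ? 'movie-2x.mp4' : 'movie-1x.mp4';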

Downloading Different Video Types Based On Device Size And Network Quality

We’ve just covered a few ways to optimize the delivery of movies to smaller screens, and also noted the inability of the video tag to choose between video types, so here is a quick JS snippet that will use the screen width to:

  • Not deliver video on screens below 500px;
  • Deliver small videos for screens 500-1400;
  • Deliver a larger sized video to all other devices.
<html>
<body>
<div id="text"></div>
<div id="video"></div>
<script>
// Get screen width and pixel ratio
var width = screen.width;
var dpr = window.devicePixelRatio;
// Initialise 2 videos —
// “small” is 960 pixels wide (2.6 MB), “large” is 1920 pixels wide (10 MB)
var smallVideo = "http://res.cloudinary.com/dougsillars/video/upload/w_960/v1534228645/30s4kbbb_oblsgc.mp4";
var bigVideo = "http://res.cloudinary.com/dougsillars/video/upload/w_1920/v1534228645/30s4kbbb_oblsgc.mp4";
// TODO: add logic for serving retina videos on high-DPR screens
if (width < 500) {
  // Screens narrower than 500 pixels get no video at all
  document.getElementById('text').innerHTML = "No video is shown for this screen size.";
} else if (width < 1400) {
  // Build a simple video tag (the attributes here are illustrative)
  var videoTag = "<video src='" + smallVideo + "' controls></video>";
  console.log(videoTag);
  document.getElementById('video').innerHTML = videoTag;
  document.getElementById('text').innerHTML = "This is a small video.";
} else {
  var videoTag = "<video src='" + bigVideo + "' controls></video>";
  console.log(videoTag);
  document.getElementById('video').innerHTML = videoTag;
  document.getElementById('text').innerHTML = "This is a large video.";
}
</script>
</body>
</html>

This script divides user’s screens into three options:

  1. Under 500 pixels, no video is shown.
  2. Between 500 and 1400, we have a smaller video.
  3. For larger than 1400 pixel wide screens, we have a larger video.

Our page has a responsive video with two different sizes: one for mobile, and another for desktop-sized screens. Mobile users get great video quality, but the file is only 2.6 MB, compared to the 10MB video for desktop.

Animated GIFs

Animated GIFs are big files. While both aGIFs and video files compress the data along the width and height dimensions, only video files have compression on the (often larger) time axis. aGIFs are essentially “flipping through” static GIF images quickly. This lack of compression adds a significant amount of data. Thankfully, it is possible to replace aGIFs with a looping video, potentially saving MBs of data for each request.

<video loop autoplay muted src="pseudoGif.mp4">

In Safari, there is an even fancier approach: You can place a looping mp4 in the picture tag, like so:

<picture>
  <source type="video/mp4" loop autoplay srcset="loopingmp4.mp4">
  <source type="image/webp"  srcset="animated.webp">
  <src="animated.gif">
</picture>

In this case, Safari will play the looping MP4, while Chrome (and other browsers that support WebP) will play the animated WebP, with a fallback to the animated GIF. You can read more about this approach in Colin Bendell’s great post.

Third-Party Videos

One of the easiest ways to add video to your website is to simply copy/paste the code from a video sharing service and put it on your site. However, just like adding any third party to your site, you need to be vigilant about what kind of content is added to your page, and how that will affect page load. Many of these “simply paste this into your HTML” widgets add 100s of KB of JavaScript. Others will download the entire movie (think preload="auto"), and some will do both.

Third-Party Video Best Practice: Trust but verify. Examine how much content is added, and how much it affects your page load time. Also, the behavior might change, so track with your analytics regularly.
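One way to keep an eye on this is a quick check with the Resource Timing API (a rough sketch; the host string below is a placeholder for whichever video provider you embed):

// Sum the transfer size of everything loaded from a given third-party host.
// Run it in the console, or ship the number to your analytics.
var thirdPartyBytes = performance.getEntriesByType('resource')
  .filter(function (entry) { return entry.name.indexOf('video-provider.example') !== -1; })
  .reduce(function (total, entry) { return total + (entry.transferSize || 0); }, 0);
console.log('Third-party video bytes: ' + (thirdPartyBytes / 1024).toFixed(1) + ' KB');

Note that transferSize is only reported for cross-origin resources that send a Timing-Allow-Origin header, so the result may undercount.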

Streaming Startup

When a video stream is requested, the server supplies a manifest file to the player, listing every available stream (with dimensions and bitrate information). In HLS streaming, the player generally chooses the first stream in the list to begin playback. Therefore, the stream positioned first in the manifest file should be optimized for video startup on both mobile and desktop (or perhaps alternative manifest files should be delivered to mobile vs. desktop).

In most cases, the startup is optimized by using a lower quality stream to begin playback. Once the player downloads a few segments, it has a better idea of available throughput and can select a higher quality stream for later segments. As a user, you have likely seen this — where the first few seconds of a video looks very pixelated, but a few seconds into playback the video sharpens.

In examining 1,065 manifest files delivered to mobile devices from the HTTP Archive, we find that 59% of videos have an initial bitrate under 1.2 MBPS — and are likely to start streaming without much delay at 1.6 MBPS 3G data rates. 11% use a bitrate between 1.2 and 1.6 MBPS — which may slow the startup on 3G — and 30% have a bitrate above 1.6 MBPS — and are unable to play back at this bitrate on a 3G connection. Based on this data, it appears that ~41% of all videos will not be able to sustain the initial bitrate on mobile — adding to startup delay, and possibly an increased number of stalls during playback.


Column chart showing initial bitrates in streaming videos. Many videos have too high an initial bitrate to stream on mobile.
Initial bitrate for video streams (Large preview)

Streaming Startup Best Practice: Ensure your initial bitrate in the manifest file is one that will work for most of your customers. If the player has to change streams during startup, playback will be delayed and you will lose video views.
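As a sketch of what that ordering looks like in an HLS master manifest (the bitrates, resolutions, and paths below are illustrative, not taken from the tested sites), the lowest-bitrate rendition is listed first so that it becomes the default pick at startup:

#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=215000,RESOLUTION=320x180
low/215k.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=600000,RESOLUTION=640x360
mid/600k.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=2600000,RESOLUTION=1280x720
high/2600k.m3u8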

So, what happens when the video’s bitrate is near (or above) the available throughput? After a few seconds of download without a completed video segment ready for playback, the player stops the download and chooses a lower quality bitrate video, and begins the process again. Downloading a video segment and then abandoning it adds to the startup delay, which leads to video abandonment.

We can visualize this by building video manifests with different initial bitrates. We test 3 different scenarios: starting with the lowest (215 KBPS), middle (600 KBPS), and highest bitrate (2.6 MBPS).

When beginning with the lowest quality video, playback begins at 11s. After a few seconds, the player begins requesting a higher quality stream, and the picture sharpens.

When starting with the highest bitrate (testing on a 3G connection at 1.6 MBPS), the player quickly realizes that playback cannot occur, and switches to the lowest bitrate video (215 KBPS). The video starts playing at 17s. There is a 6-second delay, and the video quality is the same low quality delivered in the first test.

Using the middle-quality video allows for a bit of a tradeoff: the video begins playing at 13s (2 seconds slower), but it is high quality from the start, and there is no jump from pixelated to higher-quality video.

Best Practice for Video Startup: For fastest playback, start with the lowest quality stream. For longer videos, you might consider using a ‘middle quality’ stream at start to deliver sharp video at startup (with a slightly longer delay).


Thumbnails of 3 pages with video loading.
WebPage Test Thumbnails (Large preview)

WebPageTest results: Initial video stream is low, middle and high (from top to bottom). The video starts the fastest with the lowest quality video. It is important to note that the high quality start video at 17s is the same quality as the low quality start at 11s.

Streaming: Continuing Playback

When the video player can determine the optimal video stream for playback and the stream is lower than the available network speed, the video will playback with no issues. There are tricks that can help ensure that the video will deliver in an optimal manner. If we examine the following manifest entry:

#EXT-X-STREAM-INF:BANDWIDTH=912912,PROGRAM-ID=1,CODECS="avc1.42c01e,mp4a.40.2",RESOLUTION=640x360,SUBTITLES="subs"
video/600k.m3u8

The information line reports that this stream has a 913 KBPS bitrate, and 640×360 resolution. If we look at the URL that this line points to, we see that it references a 600k video. Examining the video files shows that the video is 600 KBPS, and the manifest is overstating the bitrate.

Overstating The Video Bitrate

  • PRO
    Overstating the bitrate will ensure that when the player chooses a stream, the video will download faster than expected, and the buffer will fill up faster than expected, reducing the possibility of a stall.
  • CON
    By overstating the bitrate, the video delivered will be a lower quality stream. If we look at the entire list of reported vs. actual bitrates:
Reported (KBPS) Actual (KBPS) Resolution
913 600 640×360
142 64 320×180
297 180 512×288
506 320 512×288
689 450 412×288
1410 950 853×480
2090 1500 1280×720

For users on a 1.6 MBPS connection, the player will choose the 913 KBPS bitrate, serving the customer 600 KBPS video. However, if the bitrates had been reported accurately, the 950 KBPS bitrate would be used, and would likely have streamed with no issues. While the choices here prevent stalls, they also lower the quality of the delivered video to the consumer.

Best Practice: A small overstatement of video bitrate may be useful to reduce the number of stalls in playback. However, too large a value can lead to reduced quality playback.

Test the Neilsen video in the browser, and see if you can make it jump back and forth.

Conclusion

In this post, we’ve walked through a number of ways to optimize the videos that you present on your websites. By following the best practices illustrated in this post:

  1. preload="auto"
    Only use if there is a high probability that this video will be watched by your customers.
  2. preload="metadata"
    Default in Chrome, but can still lead to large video file downloads. Use with caution.
  3. Silent Videos (looping GIFs or background videos)
    Strip out the audio channel
  4. Video Dimensions
    Consider delivering differently sized videos to mobile and desktop. The videos will be smaller, download faster, and your users are unlikely to see the difference (your server load will go down too!)
  5. Video Compression
    Don’t forget to compress the videos to ensure that they are delivered at the smallest size possible.
  6. Don’t ‘hide’ videos
    If the video will not be displayed — don’t download it.
  7. Audit your third-party videos regularly
  8. Streaming
    Start with a lower quality stream to ensure fast startup. (For longer play videos, consider a medium bitrate for better quality at startup)
  9. Streaming
    It’s OK to be conservative on bitrate to prevent stalling, but go too far, and the streams will deliver a lower quality video.

You will find that the video on your page is streamlined for optimal delivery and that your customers will not only delight in the video you present but also enjoy a faster page load time overall.

Smashing Editorial
(dm, ra, il)

Source: Smashing Magazine, Video Playback On The Web: Video Delivery Best Practices (Part 2)

Video Playback On The Web: The Current State Of Video (Part 1)

dreamt up by webguru in Uncategorized | Comments Off on Video Playback On The Web: The Current State Of Video (Part 1)

Video Playback On The Web: The Current State Of Video (Part 1)

Video Playback On The Web: The Current State Of Video (Part 1)

Doug Sillars



Usage of video on the web is increasing as devices and networks become faster and more capable of handling video content. Research shows that sites with video increase engagement by 80%. E-Commerce sites with video have higher conversions than sites without video.

But adding video can come at a cost. Videos (being larger files) add to the page load time, and performance research shows that slower pages have the opposite effect: lower customer engagement and conversions. In this article, I’ll examine the important metrics to balance performance and video playback on the web, look at how video is being used today, and provide best practices on delivering video on the web.

One of the first steps to improve customer satisfaction is to speed up the load time of a page. Google has shown that mobile pages that take over three seconds to load lose 53% of their audience to abandonment. Other studies find that on improving site performance, usage and sales increase.

Adding video to a website will increase engagement, but it can also dramatically slow down the load time, so it is clear that a balance must be found between adding videos to your site and not impacting the load time too greatly.

Recommended reading: Front-End Performance Checklist 2018 [PDF, Apple Pages]

Video On The Web Today

To examine the state of video on the web today, I’ll use data from the HTTP Archive. The HTTP Archive uses WebPageTest to scan the performance of 1.2 million mobile and desktop websites every two weeks, and then stores the data in Google BigQuery.

Typically just the main page of each domain is checked (meaning www.cnn.com is run, but www.cnn.com/politics is not). This data can help us understand how the usage of video on the web affects the performance of websites. Mobile tests are run on an emulated Motorola G4 with a 3G internet connection (1.6 MBPS down, 768 KBPS up, 300 ms RTT), and desktop tests run Chrome on a cable connection (5 MBPS down, 1 MBPS up, 28ms RTT). I’ll be using the data set from 1 August 2018.

Sites That Download Video

As a first step to study sites with video, we should look at sites that download video files when the page loads. There are 35k mobile sites and 55k desktop sites with video file downloads in the HTTP Archive data set (that’s 3% of all mobile sites and 4.5% of all desktop sites). Comparing desktop to mobile, we find that 30k of these sites have video on both mobile and desktop (leaving ~5,800 sites on mobile with no video on the desktop).


Pie chart showing overlap of mobile and desktop video sites
Mobile and Desktop Sites with Video (Large preview)

The median mobile page with video weighs in at a hefty 7 MB (583% larger than 1.2 MB found for the median mobile site). This increase is not fully accounted for by video alone (2.5 MB). As sites with video tend to be more feature rich and visually engaging, they also use more images (the median site has over 1 MB more), CSS, and Javascript. The table below also shows that the median SpeedIndex (a measurement of how quickly content appears on the screen) for sites with video is 3.7s slower than a typical mobile site, taking a whopping 11.5 seconds to load.

SpeedIndex Total Bytes Video Bytes CSS Bytes Image Bytes JS Bytes
Video 11544 6,963,579 2,526,098 80,327 1,596,062 708,978
All sites 7780 1,201,802 0 40,648 449,585 336,973

This clearly shows that sites that are more interactive and have video content take (on average) longer to load than sites without video. But can we speed up video delivery? What else can we learn from the data at hand?

Video Hosting

When examining video delivery, are the files being served from a CDN or video provider, or are developers hosting the videos on their own servers? By examining the domain of the videos delivered on mobile, we find that 12,163 domains are used to deliver video, indicating that ~49% of sites are serving their own video files. If we stack rank the domains by frequency, we are able to determine the most common video hosting solutions:

Video Domain Count %
fbcdn.net 116788 67%
akamaihd.net 11170 6%
googlevideo.com 10394 6%
cloudinary.com 3170 2%
amazonaws.com 1939 1%
cloudfront.net 1896 1%
pixfs.net 1853 1%
akamaized.net 1573 1%
tedcdn.com 1507 1%
contentabc.com 1507 1%
vimeocdn.com 1373 1%
dailymotion.com 1337 1%
teads.tv 1022 1%
youtube.com 1007 1%
adstatic.com 998 1%

The top CDNs and domains — Facebook, Akamai, Google, Cloudinary, AWS, and Cloudfront — lead the way, which is not surprising. However, it is interesting to see YouTube and Vimeo so far down the list, as they are two of the most popular video sharing sites.

Let’s look into YouTube, Vimeo and Facebook video delivery:

YouTube Video Counts

By default, pages with a YouTube video embedded do not actually download any video files — just scripts and a placeholder image — so they do not show up in a query looking for sites with video downloads. One of the JavaScript downloads for the YouTube video player is www-embed-player.js. Searching for this file, we find 69k instances on 66,647 mobile sites. These sites have a median SpeedIndex of 10,700, and data usage of 3.31 MB — better than sites with video downloaded, but still slower than sites with no video at all. The increase in data is directly associated with more images and JavaScript (as shown below).

SpeedIndex Total Bytes Video Bytes CSS Bytes Image Bytes JS Bytes
Video 11544 6,963,579 2,526,098 80,327 1,596,062 708,978
All sites 7780 1,201,802 0 40,648 449,585 336,973
YouTube script 10700 3,310,000 0 126,314 1,733,473 1,005,758

Vimeo Video Counts

There are 14,148 requests for Vimeo videos in the HTTP Archive. I see only 5,848 requests for the player.js file (in the format https://f.vimeocdn.com/p/3.2.0/js/player.js), implying that perhaps there are many videos on one page, or that the player file is served from another location. There are 17 different versions of the player present in the HTTP Archive, with the most popular being 3.1.5 and 3.1.4:

URL cnt
https://f.vimeocdn.com/p/3.1.5/js/player.js 1832
https://f.vimeocdn.com/p/3.1.4/js/player.js 1057
https://f.vimeocdn.com/p/3.1.17/js/player.js 730
https://f.vimeocdn.com/p/3.1.8/js/player.js 507
https://f.vimeocdn.com/p/3.1.10/js/player.js 432
https://f.vimeocdn.com/p/3.1.15/js/player.js 352
https://f.vimeocdn.com/p/3.1.19/js/player.js 153
https://f.vimeocdn.com/p/3.1.2/js/player.js 117
https://f.vimeocdn.com/p/3.1.13/js/player.js 105

There does not appear to be any performance gain for any Vimeo Library — all of the pages have similar load times.

Note: Using www-embed-player.js for YouTube or https://f.vimeocdn.com/p/*/js/player.js for Vimeo are good fingerprints for browsers with a clean cache, but if the customer has previously browsed a site with an embedded video, this file might already be in the browser cache, and thus will not be re-requested. But, as Andy Davies recently noted, this is not a safe assumption to make.

Facebook Video Counts

It is surprising that in the HTTP Archive data, 67% of all video requests are from Facebook’s CDN. It turns out that on Chrome, 3rd party Facebook widgets download 30% of all videos posted inside the widget (This effect does not occur in Safari or in Firefox.). It turns out that a 3rd party widget added with just a few lines of code is responsible for 57% of all the video seen in the HTTP Archive.

Video File Types

The majority of videos on pages tested are Mp4s. If we look at all the videos downloaded (excluding those from Facebook), we get the following view:

File extension Video count %
.mp4 48,448 53%
.ts 18,026 20%
.webm 3,946 4%
(no extension) 14,926 16%
.m4s 2,017 2%
.mpg 1,431 2%
.mov 441 0%
.m4v 407 0%
.swf 251 0%

Of the files with no extension — 10k are googlevideo.com files.

What can we learn about these files? Let’s look at each file type to learn more about the content being delivered.

I used FFPROBE to query the 34k unique MP4 files, and obtained data for 14,700 videos (many of the videos had changed or been removed in the 3 weeks from HTTP Archive capture to analysis).

MP4 Video Data

Of the 14.7k videos in the dataset, 8,564 have audio tracks (58%). Shorter videos that autoplay or videos that play in the background do not require audio, so stripping the audio track is a great way to reduce the file size (and speed the delivery) of your videos.

The next most important aspect to quickly downloading a video are the dimensions. The larger the dimensions (and in the case of video, there are three dimensions to consider: width × height × time), the larger the video file will be.

MP4 Video Duration

Most of the 14k videos studied are short: the median (50th percentile) duration is 21s. However, 10% of the videos surveyed are over 2 minutes in duration. Use cases here will, of course, be divided, but for short video loops, or background videos — shorter videos will use less data, and download faster.


Column Chart breaking down video length in 10 second segments
Distribution of Video Duration (Large preview)

MP4 Video Width And Height

The dimensions of the video on the screen decide how many pixels each frame will have to use. The chart below shows the various video widths that are being served to the mobile device. (As a note, the Moto G4 has a screen size of 1080×1920, and the pages are all viewed in portrait mode).


A column chart displaying the count of each video width observed in the data set
Video Counts by Width (Large preview)

As the data shows, the two most utilized video widths are significantly larger than the G4 screen (when held in portrait mode). A full 49% of all videos are served with a width greater than 1080 pixels wide. The Alcatel 1x, a new Android Go device, has a 480×960-pixel screen. 77% of the videos delivered in the sample set are larger than 480 pixels wide.

As dimensions of videos decrease, so does the files size (and thus time to deliver the video). This is the primary reason to resize videos.

Why are these videos so large? If we correlate the videos served on mobile and desktop, we find that 18% of videos served on mobile are the same videos served on the desktop. This is a ‘problem’ solved for images years ago with responsive images. By serving differently sized videos to different sized devices, we can ensure that beautiful videos are served, but at a size and dimension that makes sense for the device.

MP4 Video Bitrate

The bitrate of the video delivered to the device has a large effect on how well the video will play back. The HTTP Archive tests are run on a 3G connection at 1.6 MBPS download speed. To play back without stalling, the download has to be faster than playback. Let’s provide 80% of the available bitrate to video files (1.3 MBPS). 47% of the videos in the sample set have a bitrate over 1.3 MBPS, meaning that when these videos are played on a 3G connection, they are more likely to stall — leading to unhappy customers. 27% of videos have a bitrate higher than 2.5 MBPS, 10% are higher than 5 MBPS, and 35 of the videos served to mobile devices have a bitrate over 10 MBPS. These larger videos are unlikely to play without stalling on many connections — fixed or mobile.


Column chart listing video bitrates in 500 KBPS buckets.
Observed Video Bitrates (Large preview)

What Leads To Higher Bitrates

Larger videos tend to carry a larger bitrate, as larger dimensioned videos require a lot more data to populate the additional pixels on the device. Cross referencing the bitrate of each video to the width confirms this: videos with width 1280 (orange) and 1920 (gray) have a much broader distribution of bitrates (more data points to the right in the chart). The column marked in yellow denotes the 136 videos with width 1920, and a bitrate between 10-11 MBPS.


3D Column chart showing how bitrate and video size are related
Bitrate Vs. Video Width (Large preview)

If we visualize only the videos over 1.6 MBPS, it becomes clear that the higher screen resolutions (1280 and 1920) are responsible for the higher bitrate videos.


3D chart showing that larger width videos generally have higher bitrate
High Bitrate and Video Width (Large preview)

MP4: HTTP vs. HTTPS

HTTP2 has redefined content delivery with multiplexed connections — where just one connection per server is required. For large files like video, does HTTP2 provide a meaningful improvement to content delivery? If we look at the stats from the HTTP Archive:


Pie Chart of HTTP1 vs. HTTP2 for video delivery
HTTP1 vs. HTTP2 (Large preview)

Omitting the 116k Facebook videos (all sent via HTTP2), we see that it is about a 50:50 split between HTTP 1.1 and HTTP2. However, HTTP 1.1 can also utilize HTTPS, and when we filter for HTTPS usage, we find that 81% of video streams are sent via HTTPS, with HTTP2 being used slightly more than HTTPS over HTTP 1.1 (41% vs. 36%).


Pie Chart further showing HTTP1 nonsecure and secure breakdown
HTTP vs. HTTP2 and secure (Large preview)

As you can see, comparing the speed of HTTP and HTTP2 video delivery is a work in progress.

HLS Video Streaming

Video streaming using adaptive bitrate is an ideal way to deliver video to the end user. Multiple versions of each video are built with different dimensions and bitrates. The list of available streams is presented to the playback device, and the video player on the device can choose the most appropriate stream based on the size of the device screen and the available network conditions. There are 1,065 manifest files (and 14k video stream files) in the HTTP Archive data set that I examined.

Video Stream Playback

One key metric in video streaming is to have the video start as quickly as possible. While the manifest file has a list of available streams, the player has no idea of the available network bandwidth at the beginning of playback. Because it has to pick a stream to begin streaming, the player typically chooses the first one in the list. In order to facilitate a fast video startup, it is important to place the correct stream at the top of your manifest file.

Note: Utilizing the Chrome Network Info API to generate manifest files on the fly might be a good way to quickly optimize video content at startup.

One way to ensure that the video starts quickly is to start with the lowest quality video segment, as the download will be the fastest. The initial video quality may be pixelated, but as the player better understands the network quality, it can quickly adjust to a more appropriate (hopefully higher quality) video stream. With that in mind, let’s look at the initial stream bitrates in the HTTP Archive.


Column chart showing initial bitrates in streaming videos. Many videos have too high an initial bitrate to stream on mobile.
Initial Bitrate for video streams (Large preview)

The red line in the above chart denotes 1.5 MBPS (recall that mobile tests are run at 1.6 MBPS, and not only video content is being downloaded). We see that 30.5% of all of the streams (everything to the left of the line) start with an initial bitrate higher than 1.5 MBPS — and are thus unlikely to play back smoothly on a 3G connection — and 17% start above 2 MBPS.

So what happens when video download is slower than the actual playback of the video? Initially, the player will attempt to download the (too) large bitrate files, but based on the download speed, will realise the problem. The player will then switch to downloading a lower bitrate video, so that download is faster than playback. The problem is that the initial download attempt takes time, and adding a delay to video playback start leads to customer abandonment.

We can also look at the most common bitrates of .ts files (the files that have the video content), to see what bitrates end up being downloaded on mobile. This data includes the initial bitrates, and any subsequent file downloaded during the WebPageTest run:


Column chart of observed bitrates once streaming begins
Observed Mobile Bitrates (Large preview)

We can see two major groupings of video streaming bitrates (100-300 KBPS, and a broader peak from 300-1,000 KBPS). This is where we would expect streams to appear, given that the network speed is capped at 1.6 MBPS.

Comparing the data to the desktop, mobile clearly skews toward the lower bitrates, and desktop streams have high peaks in the 500-600 and 800-900 KBPS ranges that are not seen on mobile.


Column chart comparing observed bitrates on mobile and desktop
Observed mobile and Desktop streaming bitrates (Large preview)

Column chart comparing initial bitrates with the observed bitrates for mobile and desktop.
Observed bitrates, mobile, desktop compared to initial bitrate (Large preview)

When we compare the initial bitrates observed (blue) with the actual files downloaded, it is very clear that for mobile the bitrate generally decreases during stream playback, indicating that lowering the initial bitrate for video streams might improve video startup performance and prevent stalls in early playback. Desktop bitrates also appear to decrease, but it is also possible that some videos move to higher bitrates during playback.

Conclusion

Video content on the web increases customer engagement and satisfaction. Pages that load quickly have the same effect. The addition of video to your website will slow down the page rendering time, necessitating a balance between overall page load and video content. To reduce the size of your video content, ensure that you have versions appropriately sized for mobile device dimensions, and use shorter videos when possible.

If playback of your videos is not essential, follow the path of YouTube and Vimeo — download all the required pieces to be ready for playback, but don’t actually download any video segments until the user presses play. Finally — if you are streaming video — start with the lowest quality setting to ensure a fast video playback.
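A minimal sketch of that pattern (the element ID and file names are placeholders, and the markup is an assumption rather than code from YouTube or Vimeo): show only a poster image, and inject the video element the moment the user asks for it.

<img id="poster" src="poster.jpg" alt="Play video" width="640" height="360">
<script>
  // Replace the poster with a real, autoplaying video only after a click,
  // so no video bytes are requested during the initial page load.
  document.getElementById('poster').addEventListener('click', function () {
    var video = document.createElement('video');
    video.src = 'movie.mp4';
    video.controls = true;
    video.autoplay = true;
    this.replaceWith(video);
  });
</script>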

In my next post on video, I will take these general findings, and dig deeper into how to resolve potential issues with examples. Stay tuned!

Smashing Editorial
(dm, ra, il)

Source: Smashing Magazine, Video Playback On The Web: The Current State Of Video (Part 1)

Splicing HTML’s DNA With CSS Attribute Selectors

dreamt up by webguru in Uncategorized | Comments Off on Splicing HTML’s DNA With CSS Attribute Selectors

Splicing HTML’s DNA With CSS Attribute Selectors

Splicing HTML’s DNA With CSS Attribute Selectors

John Rhea



For most of my career, attribute selectors have been more magic than science. I’d stare, gobsmacked, at the CSS for outputting a link in a print style sheet, understanding nothing. I’d dutifully copy and paste it into my print stylesheet, then run off to put out whatever project was the largest burning trash heap.

But you don’t have to stare slack-jawed at CSS attribute selectors anymore. By the end of this article, you’ll use them to run diagnostics on your site, fix otherwise unsolvable problems, and generate technologic experiences so advanced they feel like magic. You may think I’m promising too much and you’re right, but once you understand the power of attribute selectors, you might feel like exaggerating yourself.

On the most basic level, you put an HTML attribute in square brackets and call it an attribute selector like so:

[href] {
   color: chartreuse;
}

The text of any element that has an href and doesn’t have a more specific selector will now magically turn chartreuse. Attribute selector specificity is the same as classes.

Note: For more on the cage match that is CSS specificity, you can read “CSS Specificity: Things You Should Know” or if you like Star Wars: “CSS Specificity Wars”.

But you can do far more with attribute selectors. Just like your DNA, they have built-in logic to help you choose all kinds of attribute combinations and values. Instead of only exact matching the way a tag, class, or id selector would, they can match any attribute and even string values within attributes.

Attribute Selection

Attribute selectors can live on their own or be more specific, e.g. if you need to select all div tags that have a title attribute.

div[title]

But you could also select the children of divs that have a title by doing the following:

div [title]

To be clear, no space between them means the attribute is on the same element (just like an element and class without a space), and a space between them means a descendant selector, i.e. selecting the element’s children who have the attribute.

You can get far more granular in how you select attributes including the values of attributes.

div[title="dna"]

The above selects all divs with an exact title of “dna”. A title of “dna is awesome” or “dnamutation” wouldn’t be selected, though there are selector algorithms that handle each of those cases (and more). We’ll get to those soon.

Note: Quotation marks are not required in attribute selectors in most cases, but I will use them because I believe it increases clarity and ensures edge cases work appropriately.

If you wanted to select “dna” out of a space separated list like “my beautiful dna” or “mutating dna is fun!” you can add a tilde or “squiggly,” as I like to call it, in front of the equal sign.

div[title~="dna"]

To select titles such as “dontblamemeblamemydna” or “his-stupidity-is-from-upbringing-not-dna”, you can use the dollar sign $ to match the end of the attribute value.

[title$="dna"]

To match the front of an attribute value such as titles of “dnamutants” or “dna-splicing-for-all” use a caret.

[title^="dna"]

While having an exact match is helpful, it might be too tight of a selection, and the caret front match might be too wide for your needs. For instance, you might not want to select a title of “genealogy”, but still select both “gene” and “gene-data”. The pipe character (or “vertical bar,” as I like to call it) does just that: it matches the exact value, as well as the exact value immediately followed by a dash.

[title|="gene"]

To be clear, this is not a general “starts with” match; it is an exact match, plus an exact match at the beginning of the value followed by a dash.

Lastly, there’s a full search attribute operator that will match any substring.

[title*="dna"]

But use it wisely as the above will match “I-like-my-dna-like-my-meat-rare” as well as “edna”, “kidnapping”, and “echidnas” (something Edna really shouldn’t do.)

What makes these attribute selectors even more powerful is that they’re stackable — allowing you to select elements with multiple matching factors.

But what if you need to find the a tag that has a title and has a class ending in “genes”? Here’s how:

a[title][class$="genes"]

Not only can you select the attributes of an HTML element, you can also print their mutated genes using pseudo-“science” (meaning pseudo-elements and the content declaration).

<span class="joke" title="Gene Editing!">What’s the first thing a biotech journalist does after finishing the first draft of an article?</span>
.joke:hover:after {
   content: "Answer:" attr(title);
   display: block;
}

The code above will show the answer to one of the worst jokes ever written on hover (yes, I wrote it myself, and, yes, calling it a “joke” is being generous).

The last thing to know is that you can add a flag to make the attribute searches case insensitive. You add an i before the closing square bracket.

[title*="DNA" i]

And thus it would match “dna”, “DNA”, “dnA”, and any other variation.

The only downside to this is that the i only works in Firefox, Chrome, Safari, Opera and a smattering of mobile browsers.

Now that we’ve seen how to select with attribute selectors, let’s look at some use cases. I’ve divided them into two categories: General Uses and Diagnostics.

General Uses

Style By Input Type

You can style input types differently, e.g. email vs. phone.

input[type="email"] {
   color: papayawhip;
}
input[type="tel"] {
   color: thistle;
}

Display Telephone Links

You can hide a phone number at certain sizes and display a phone link instead to make calling easier on a phone.

span.phone {
   display: none;
}
a[href^="tel"] {
   display: block;
}

Internal vs. External Links, Secure vs. Insecure

You can treat internal and external links differently and style secure links differently from insecure links.

a[href^="http"]{
   color: bisque;
}
a:not([href^="http"]) {
  color: darksalmon;
}
 
a[href^="http://"]:after {
   content: url(unlock-icon.svg);
}
a[href^="https://"]:after {
   content: url(lock-icon.svg);
}

Download Icons

One attribute HTML5 gave us was “download” which tells the browser to, you guessed it, download that file rather than trying to open it. This is useful for PDFs and DOCs you want people to access but don’t want them to open right now. It also makes the workflow for downloading lots of files in a row easier. The downside to the download attribute is that there’s no default visual that distinguishes it from a more traditional link. Often this is what you want, but when it’s not, you can do something like the below.

a[download]:after { 
   content: url(download-arrow.svg);
}

You could also communicate file types with different icons like PDF vs. DOCX vs. ODF, and so on.

a[href$="pdf"]:after {
   content: url(pdf-icon.svg);
}
a[href$="docx"]:after {
   content: url(docx-icon.svg);
}
a[href$="odf"]:after {
   content: url(open-office-icon.svg);
}

You could also make sure that those icons were only on downloadable links by stacking the attribute selector.

a[download][href$="pdf"]:after {
   content: url(pdf-icon.svg);
}

Override Or Reapply Obsolete/Deprecated Code

We’ve all come across old sites that have outdated code, but sometimes updating the code isn’t worth the time it’d take to do it on six thousand pages. You might need to override or even reapply a style implemented as an attribute before HTML5.

<div bgcolor="#000000" color="#FFFFFF">Old, holey genes</div>

div[bgcolor="#000000"] { /* override */
   background-color: #222222 !important;
}
div[color="#FFFFFF"] { /* reapply */
   color: #FFFFFF;
}

Override Specific Inline Styles

Sometimes you’ll come across inline styles that are gumming up the works, but they’re coming from code outside your control. It should be said if you can change them that would be ideal, but if you can’t, here’s a workaround.

Note: This works best if you know the exact property and value you want to override, and if you want it overridden wherever it appears.

For this example, the element’s margin is set in pixels, but it needs to be expanded and set in ems so that the element can re-adjust properly if the user changes the default font size.

<div style="color: #222222; margin: 8px; background-color: #EFEFEF;"Teenage Mutant Ninja Myrtle</div>
 
div[style*="margin: 8px"] {
   margin: 1em !important;
}

Note: This approach should be used with extreme caution as if you ever need to override this style you’ll fall into an !important war and kittens will die.

Showing File Types

The list of acceptable files for a file input is invisible by default. Typically, we’d use a pseudo element for exposing them and, though you can’t do pseudo elements on most input tags (or at all in Firefox or Edge), you can use them on file inputs.

<input type="file" accept="pdf,doc,docx">
 
[accept]:after {
   content: "Acceptable file types: " attr(accept);
}

HTML Accordion Menu

The not-well-publicized details and summary tag duo are a way to do expandable/accordion menus with just HTML. Details wrap both the summary tag and the content you want to display when the accordion is open. Clicking on the summary expands the detail tag and adds an open attribute. The open attribute makes it easy to style an open details tag differently from a closed details tag.

<details>
  <summary>List of Genes</summary>
    Roddenberry
    Hackman
    Wilder
    Kelly
    Luen Yang
    Simmons
</details>
details[open] {
   background-color: hotpink;
}

Printing Links

Showing URLs in print styles led me down this road to understanding attribute selectors. You should know how to construct it yourself now. You simply select all a tags with an href, add a pseudo-element, and print them using attr() and content.

a[href]:after {
   content: " (" attr(href) ") ";
}

Custom Tooltips

Creating custom tooltips is fun and easy with attribute selectors (okay, fun if you’re a nerd like me, but easy either way).

Note: This code should get you in the general vicinity, but may need some tweaks to the spacing, padding, and color scheme depending on your site’s environment and whether you have better taste than me or not.

[title] {
  position: relative;
  display: block;
}
[title]:hover:after {
  content: attr(title);
  color: hotpink;
  background-color: slateblue;
  display: block;
  padding: .225em .35em;
  position: absolute;
  right: -5px;
  bottom: -5px;
}

AccessKeys

One of the great things about the web is that it provides many different options for accessing information. One rarely used attribute is the ability to set an accesskey so that that item can be accessed directly through a key combination and the letter set by accesskey (the exact key combination depends on the browser). But there’s no easy way to know what keys have been set on a website.

The following code will show those keys on :focus. I don’t use it on hover because, most of the time, people who need the accesskey are those who have trouble using a mouse. You can add hover as a second option, but be sure it isn’t the only option.

a[accesskey]:focus:after {
   content: " AccessKey: " attr(accesskey);
}

Diagnostics

These options are for helping you identify issues either during the build process or locally while trying to fix issues. Putting these on your production site will make errors stick out to your users.

Audio Without Controls

I don’t use the audio tag too often, but when I do use it, I often forget to include the controls attribute. The result: nothing is shown. If you’re working in Firefox, this code can help you suss out if you’ve got an audio element hiding or if syntax or some other issue is preventing it from appearing (it only works in Firefox).

audio:not([controls]) {
  width: 100px;
  height: 20px;
  background-color: chartreuse;
  display: block;
}

No Alt Text

Images without alt text are a logistics and accessibility nightmare. They’re hard to find by just looking at the page, but if you add this they’ll pop right out.

Note: I use outline instead of border because borders could add to the element’s width and mess up the layout. outline does not add width.

img:not([alt]) { /* no alt attribute */ 
  outline: 2em solid chartreuse; 
}
img[alt=""] { /* alt attribute is blank */ 
  outline: 2em solid cadetblue; 
}

Asynchronous Javascript Files

Web pages can be a conglomerate of content management systems and plugins and frameworks and code that Ted (sitting three cubicles over) wrote on vacation because the site was down and he fears your boss. Figuring out what JavaScript loads asynchronously and what doesn’t can help you focus on where to enhance page performance.

script[src]:not([async]) {
  display: block;
  width: 100%;
  height: 1em;
  background-color: red;
}
script:after {
  content: attr(src);
}

Javascript Event Elements

You can also highlight elements that have a JavaScript event attribute to refactor them into your JavaScript file. I’ve focused on the OnMouseOver attribute here, but it works for any of the JavaScript event attributes.

[OnMouseOver] {
   color: burlywood;
}
[OnMouseOver]:after {
   content: "JS: " attr(OnMouseOver);
}

Hidden Items

If you need to see where your hidden elements or hidden inputs live you can show them with:

[hidden], [type="hidden"] {
  display: block;
}

But with all these amazing capabilities you think there must be a catch. Surely attribute selectors must only work while flagged in Chrome or in the nightly builds of Fiery Foxes on the Edge of a Safari. This is just too good to be true. And, unfortunately, there is a catch.

If you want to work with attribute selectors in that most beloved of browsers — that is, IE6 — you won’t be able to. (It’s okay; let the tears fall. No judgments.) Pretty much everywhere else you’re good to go. Attribute selectors are part of the CSS 2.1 spec and thus have been in browsers for the better part of a decade.

And so these selectors should no longer be magical to you but revealed as a sufficiently advanced technology. They are more science than magic, and now that you know their deepest secrets, it’s up to you. Go forth and work mystifying wonders of science upon the web.

Smashing Editorial
(dm, ra, yk, il)

Source: Smashing Magazine, Splicing HTML’s DNA With CSS Attribute Selectors

Collective #461

dreamt up by webguru in Uncategorized | Comments Off on Collective #461



C461_Galio

Galio

A new framework for rapidly building mobile apps that comes with many ready-to-use features and components.

Check it out



















Collective #461 was written by Pedro Botelho and published on Codrops.


Source: Codrops, Collective #461

Monthly Web Development Update 10/2018: The Hurricane Web, End-To-End-Integrity, And RAIL

dreamt up by webguru in Uncategorized | Comments Off on Monthly Web Development Update 10/2018: The Hurricane Web, End-To-End-Integrity, And RAIL

Monthly Web Development Update 10/2018: The Hurricane Web, End-To-End-Integrity, And RAIL

Monthly Web Development Update 10/2018: The Hurricane Web, End-To-End-Integrity, And RAIL

Anselm Hannemann



With the latest studies and official reports out this week, it seems that to avoid irreversible climate change on Planet Earth, we need to act drastically within the next ten years. This raised a couple of doubts and assumptions that I find worth writing about.

One of the arguments I hear often is that we as individuals cannot make an impact and that climate change is “the big companies’ fault”. However, we as consumers are the ones who make the decisions about what we buy and from whom, whose products we use and which ones we avoid. And by choosing wisely, we can make a change. By talking to other people around you, by convincing your company owner to switch to renewable energy, for example, we can transform our society and economy into a more sustainable one that doesn’t harm the planet as much. It will be a hard task, of course, but we can’t deny our individual responsibility.

Maybe we should take this as an occasion to rethink how much we really need. Maybe going out into nature helps us reconnect with our environment. Maybe building something from hand and with slow methods, trying to understand the materials and their properties, helps us grasp how valuable the resources we currently have are — and what we would lose if we don’t care about our planet now.

News

  • Chrome 70 is here with Desktop Progressive Web Apps on Windows and Linux, public key credentials in the Credential Management API, and named Workers.
  • Postgres 11 is out and brings more robustness and performance for partitioning, enhanced capabilities for query parallelism, Just-in-Time (JIT) compilation for expressions, and a couple of other useful and convenient changes.
  • As the new macOS Mojave and iOS 12 are out now, Safari 12 is as well. What’s new in this version? A built-in password generator, a 3D and AR model viewer, icons in tabs, web pages on the latest watch OS, new form field attribute values, the Fullscreen API for iOS on iPads, font collection support in WOFF2, the font-display loading CSS property, Intelligent Tracking Prevention 2.0, and a couple of security enhancements.
  • Google’s decision to force users to log into their Google account in the browser to be able to access services like Gmail caused a lot of discussions. Due to the negative feedback, Google promptly announced changes for v70. Nevertheless, this clearly shows the interests of the company and in which direction they’re pushing the app. This is unfortunate as Chrome and the people working on that project shaped the web a lot in the past years and brought the ecosystem “web” to an entirely new level.
  • Microsoft Edge 18 is out and brings along the Web Authentication API, new autoplay policies, Service Worker updates, as well as CSS masking, background blend, and overscroll.

General

  • Max Böck wrote about the Hurricane Web and what we can do to keep users up-to-date even when bandwidth and battery are limited. Interestingly, CNN and NPR provided text-only pages during Hurricane Florence that use very little bandwidth and don’t drain batteries. It would be amazing if we could move default websites towards these goals — saving power and bandwidth — to improve not only performance and load times but also help the environment and make users happier.

UI/UX

Redesign portfolio website
Shawn Parks shares the lessons he learned from redesigning his portfolio every year. (Image credit)

Accessibility

Tooling

Privacy

  • Guess what? Our simple privacy-enhancing tools that delete cookies are useless as this article shows. There are smarter ways to track a user via TLS session tracking, and we don’t have much power to do anything against it. So be aware that someone might be able to track you regardless of how many countermeasures you have enabled in your browser.
  • Josh Clark’s comment on university research about Google’s data collection is highlighting the most important parts about how important Android phone data is to Google’s business model and what type of information they collect even when your smartphone is idle and not moving location.

Security

End-to-End Integrity with IPFS illustrated with cats and dogs
Cloudflare’s IPFS gateway allows a website to be end-to-end secure while maintaining the performance and reliability benefits of being served from their edge network. (Image credit)

Web Performance

Illustration of the RAIL model
The four parts of the RAIL performance model: Response, Animation, Idle, Load. (Image credit)

HTML & SVG

JavaScript

  • Willian Martins shares the secrets of JavaScript’s bind() function, a widely unknown operator that is so powerful and allows us to invoke this from somewhere else into named, non-anonymous functions. A different way to write JavaScript.
  • Everyone knows what the “9am rush hour” means. Paul Lewis uses the term to rethink how we build for the web and why we should try to avoid traffic jams on the main thread of the browser and outsource everything that doesn’t belong to the UI into separate traffic lanes instead.

CSS

An item placed inside a grid using negative grid lines
Did you know you can use negative grid line numbers to position Grid items with CSS? (Image credit)

Work & Life

Going Beyond…

  • In the Netherlands, there’s now a legal basis that prescribes CO2 emissions to be cut by 25% by 2020 (that’s just a bit more than one year from now). I love the idea and hope other countries will be inspired by it — Germany, for example, which currently moves its emission cut goals farther and farther into the future.
  • David Wolpert explains why computers use so much energy and how we could make them vastly more efficient. But for that to happen, we need to understand the thermodynamics of computing better.
  • Turning down twenty billion dollars is cool. Of course, it is. But the interesting point in this article about the Whatsapp founder who just told the world how unhappy he is having sold his service to Facebook is that it seems that he believed he could keep the control over his product.

One more thing: I’m very grateful for all of you who helped raise my funding level for the Web Development Reading List to 100% this month. I never got so much feedback from you and so much support. Thank you! Have a great month!

—Anselm

Smashing Editorial
(cm)

Source: Smashing Magazine, Monthly Web Development Update 10/2018: The Hurricane Web, End-To-End-Integrity, And RAIL

Collective #460

dreamt up by webguru in Uncategorized | Comments Off on Collective #460





C460_neggridlines

Negative Grid Lines

Learn about negative grid lines and how they can be useful in building a CSS Grid powered layout. An article by Michelle Barker.

Read it














C460_omi

Omi

A next generation web framework in just 4KB of JavaScript that combines JSX, Web Components, Proxy and Path Updating.

Check it out



C460_Colorblindly

Colorblindly

An accessibility tool to help developers understand how colorblind users experience their website.

Check it out


Collective #460 was written by Pedro Botelho and published on Codrops.


Source: Codrops, Collective #460