
Exploring The Latest Web Design Trends Together With Be Theme


Nick Babich



(This is a sponsored article.) Designers have a strange relationship with trends. On the one hand, when designers follow a crowd, they might feel that they aren’t able to express enough creativity. On the other hand, trends can tell designers a lot about user preferences — what people love, what they hate — and ultimately help designers to create products with better adoption rates.

People are visual creatures, and visual design has a significant impact on the way we understand products. In this article, I want to focus on the most crucial web design trends and illustrate each trend using Be Theme, a responsive multipurpose WordPress theme.

Let’s get started.

1. Digital Illustrations

Digital illustrations have become one of the most important trends in visual design. Relevant illustrations can make your design stand out from a crowd and establish a truly emotional connection with visitors. Illustrations are quite a versatile tool; product designers can use digital illustrations for various purposes: for hero sections, for feature descriptions, or even as a subtle icon in the navigation bar.

Two types of illustrations are popular among digital designers: hand-drawn flat illustrations and three-dimensional ones. Flat hand-drawn ones give an impression of fine craftsmanship, of a hand-made design; it’s relatively easy to see the personal style of the illustrator through their work. Slack, Intercom and Dropbox are just a few companies that use flat hand-drawn illustrations.


Hand-drawn illustrations look and feel personal for users. (Image source: themes.muffingroup) (Large preview)

Three-dimensional illustrations are quite a new trend. Designers started using them to add more realism, blurring the boundary between the digital and physical worlds.


3D illustrations give users the impression that they can almost reach out and touch objects in the scene. (Image source: themes.muffingroup) (Large preview)

2. Vibrant Colors

There is a reason why so many digital product designers strive to use vibrant colors: Vibrant colors give visual interest to a layout. User attention is a precious resource, and one of the most effective ways to grab attention is by using colors that stand out. Bright colors used for the background can capture the visitor’s attention and contribute to a truly memorable experience.


Vivid colors are an excellent way to grab the visitor’s attention. (Image source: themes.muffingroup) (Large preview)

3. Hero Video Headers

“Show, don’t tell” is a foundational principle of good product design. Imagery plays a key role in visual design because it helps the designers to deliver the main idea quickly.

For a long time, web designers have had to use static imagery to convey their main idea. But the situation has changed. High-speed connections make it much easier for web designers to turn their home pages into immersive movie-style experiences. Video engages users, and users are more willing to spend time watching clips. Video clips used in a hero section can vary from a few seconds of looped video to full-length preview clips with audio.

Designers use video to tell stories. (Image source: themes.muffingroup) (Large preview)

4. Split Screen

Split screen is a relatively simple design technique. All you need to do to create one is divide the screen into two parts (usually 50/50) and use each part to deliver a distinct message. This technique translates well on mobile; two horizontal panels of content can be collapsed into vertical content blocks on small screens. The technique works well when you need to deliver two separate messages, as shown below.


Split screen is an excellent choice for e-commerce websites that offer products for both women and men. (Image source: themes.muffingroup) (Large preview)

It also works well when you have to pair a text message with relevant imagery:


Split screen can be used to connect a text message with relevant imagery. (Image source: themes.muffingroup) (Large preview)

5. Geometric Patterns

Designers can use geometric shapes and patterns endlessly to create beautiful ornaments. This technique works equally well for digital products. Designers can use SVG images and high-resolution PNGs with geometric patterns as backgrounds. Such backgrounds scale well, so you won’t have to worry about how they will look on small and large displays.


With geometric patterns, you can let your creativity run wild. (Image source: themes.muffingroup) (Large preview)

6. Gradients and Duotones

Gradients are a multipurpose tool that works in pretty much any type of design. Designers often use gradients to give their work a little more depth. Modern graphic design trends dictate the use of big, bold and colorful gradients, which help designers make a statement.

When it comes to gradients, designers have a lot of creative freedom. They can experiment with various colors and types, using radial gradients, linear gradients, and so on. For example, this is what happens when you put a linear one-color gradient overlay on a photo:


One-color gradient overlay on a photo. (Image source: themes.muffingroup) (Large preview)

And this is how a radial two-color gradient looks on a plain background:


Two-color gradient over a plain background. (Image source: themes.muffingroup) (Large preview)

The duotone effect was made popular by Spotify, the online music-streaming service. The service was searching for a bold identity for its brand and decided to use duotones in its design.

In the simplest terms, duotones are filters that replace the whites and blacks in a photo with two colors. Duotones can make almost any image match your company’s branding; simply use your brand’s primary color as the duotone filter.


A duotone in the hero image. (Image source: themes.muffingroup) (Large preview)

7. Bold Typography

Most designers know that content should always come first in the design process. A design should honor the message that the product’s creators want to deliver to their users. Bold typography helps designers to achieve that. Massive, screen-dominating text puts the written content center stage.

Bold fonts also serve a functional purpose: they make the text easy to read. This template is an excellent example of how powerful a bold font can be:


Designers can use bold typography to make text the focal point in a graphic. (Image source: themes.muffingroup) (Large preview)

Conclusion

“Should I follow the trends?” As a designer, you have to answer that for yourself. But if you want to see how each trend works for your project, you can do it right now. All of the Be Theme examples listed above can serve as excellent starting points for your creative journey.


Source: Smashing Magazine, Exploring The Latest Web Design Trends Together With Be Theme

Creating A Spotify-Powered App Using Nuxt.js


Cher Scarlett



We’ve all heard of Spotify. Launched back in 2008, the app offers millions of tracks from various legendary and up-and-coming artists. It allows you to create a playlist, follow other people, or choose a playlist based on your mood.

But let’s take the app from another perspective today. Let’s build a two-page server-side rendered web application featuring a “Now Playing on Spotify” component. I’ll walk you through all of the steps of building a client-side application, building and connecting to a server API, as well as connecting to external API services.

Our project will be built using the Node.js and npm ecosystems, GitHub to store our code, Heroku as our host, Heroku’s Redis for our storage, and Spotify’s web API. The application and internal API will be built entirely using Nuxt’s system. Nuxt is a server-side-rendering framework that runs on Vue.js, Express.js, Webpack, and Babel.

This tutorial is moderately complex, but is broken down into very consumable sections. You’ll find a working demo at cherislistening.herokuapp.com.


Final result created with Nuxt.js, Redis, and Spotify. (Large preview)

Requirements

This tutorial requires knowledge of HTML, CSS, JavaScript (ES6), and how to use the command line or terminal. We’ll be working with Node.js and Vue.js; a basic understanding of both will be helpful before starting this tutorial. You’ll also need to have Xcode Tools installed if you are on macOS.

If you prefer to reverse-engineer, you can fork the repository.

Table Of Contents

  1. Planning Our Application
    We’ll lay out our expected functionality and a visual representation of what we plan to see when we are finished.
  2. Setting Up And Creating Our Project
    We’ll walk through how to set up an application hosted on Heroku’s server, set up auto-deployment from GitHub, set up Nuxt using the command line tools, and get our local server running.
  3. Building Our API Layer
    We’ll learn how to add an API layer to our Nuxt application, how to connect to Redis, and Spotify’s web API.
  4. Client-Side Storage And State Management
    We’ll look at how we can leverage the built-in Vuex store to keep what’s playing up to date. We’ll set up our initial data connections to our API.
  5. Building The Pages And Components
    We’ll take a brief look into how pages and components differ in Nuxt, and build two pages and a couple of components. We’ll use our data to build our Now Playing app and some animations.
  6. Publishing Our Application
    We’ll get our app onto GitHub and built on Heroku’s server, authenticate and share with everyone what music we’re listening to.

Planning Our Application

The most important step before we start any new project is to plan our goals. This will help us establish a set of requirements for accomplishing our goals.

  • How many pages are there?
  • What do we want on our pages?
  • Do we want our Spotify “Now Playing” component present on both of our pages?
  • Do we want a progress bar to show listeners where we are in the song?
  • How do we want our pages laid out?

These are the types of questions that will help us draft our requirements.

Let’s build out two pages for our application. First, we want a landing page with our “Now Playing” component. Our second page will be our authentication area where we connect our data to Spotify. Our design is going to be very minimalistic, to keep things simple.

For our “Now Playing” component, let’s plan on showing the progress of the track as a bar, the name of the track, the artist’s name, and the album art. We’ll also want to show an alternate state showing the most recent track played, in case we aren’t currently listening to anything.

Since we are dealing with Spotify’s API, we’ll have special tokens for accessing the data from our site. For security purposes, we don’t want to expose these tokens on the browser. We also only want our data, so we’ll want to ensure that we are the only user who can login to Spotify.

The first issue we find in planning is that we have to log in to Spotify. This is where our Redis cache storage comes in. Spotify’s API will allow you to permanently connect your Spotify account to an application with another special token. Redis is a highly performant in-memory data structure server. Since we are dealing with a token, a simple key:value storage system works well. We want it to be fast so we can retrieve it while our application is still loading.

Heroku has its own Redis cache service built in, so by using Heroku for our server, host, and storage, we can manage everything in one place. With the added benefit of auto-deployment, we can do everything from our console with commands in terminal. Heroku will detect our application language from our push, and will build and deploy it without much configuration.

Setting Up And Creating Our Project

Install Nodejs

Grab the right package for your OS here: https://nodejs.org/en/download/

$ node --version
 v10.0.1

Install git

Follow the instructions for your OS here: https://git-scm.com/book/en/v2/Getting-Started-Installing-Git

$ git --version
 git version 2.14.3 (Apple Git-98)

Sign Up For GitHub

Follow the instructions here: https://github.com/join and https://help.github.com/articles/set-up-git/.

Create a repository: https://help.github.com/articles/create-a-repo/

Clone the repository: https://help.github.com/articles/cloning-a-repository/

I named mine “cherislistening”. Here’s what my clone looks like:

$ git clone https://github.com/cherscarlett/cherislistening.git
Cloning into `cherislistening`...
remote: Counting objects: 4, done.
remote: Compressing objects: 100% (4/4), done.
remote: Total 4 (delta 0), reused 0 (delta 0)
Unpacking objects: 100% (4/4), done.

$ cd cherislistening/

Install And Setup Heroku

Sign up for Heroku here: https://signup.heroku.com/

Download and install the Command Line Interface (CLI): https://devcenter.heroku.com/articles/heroku-cli#download-and-install

We’ll need to log in and create our app, along with setting up some configuration variables. I named my app “cherislistening”. You can also leave off the -a flag and Heroku will give you a randomly generated name. You can always change it later. The URL of your app will be http://<APPLICATION_NAME>.herokuapp.com.

Nuxt requires some specific configuration to build and run properly, so we’ll add those now to get them out of the way.

$ heroku --version
 heroku/7.19.4 darwin-x64 node-v11.3.0

$ heroku login
heroku: Press any key to open up the browser to login or q to exit:
Logging in… done
Logged in as cher.scarlett@gmail.com

$ heroku create -a cherislistening

$ heroku config:set CLIENT_URL=http://cherislistening.herokuapp.com API_URL=/ HOST=0.0.0.0 NODE_ENV=production NPM_CONFIG_PRODUCTION=false
Setting CLIENT_URL, API_URL, HOST, NODE_ENV, NPM_CONFIG_PRODUCTION and restarting ⬢ cherislistening… done, v1
API_URL:               /
CLIENT_URL:            http://cherislistening.herokuapp.com
HOST:                  0.0.0.0
NODE_ENV:              production
NPM_CONFIG_PRODUCTION: false

Go to the Heroku dashboard and click into your newly created app. In the ‘Deploy’ tab, connect to your GitHub account, select the repository you cloned, and enable auto deploys from the master branch.


Github selected in the Heroku dashboard (Large preview)

Repository selection in Github (Large preview)

Automatic deployment setup with Github (Large preview)

Create Nuxt App

We’ll use npx to create our Nuxt application. npm is a great ecosystem for managing Node.js packages, but to run a package, we must install it and add it to our package.json file. That isn’t very useful if we want to execute a single package one time, and installing something isn’t really necessary. This makes npx suitable for executing packages that compose file trees, add boilerplate, and install the packages they need during execution.

$ npx --version
 6.4.1

npx is shipped by default in npm 5.2.0+, so it is highly recommended we upgrade npm instead of globally installing npx. If you just installed a fresh version of node.js, you should have current npm and npx.

The Nuxt.js team has created a scaffolding tool which will give your application the basic structure required to run. Make sure you’re in your new project’s folder before running the command.

$ npx create-nuxt-app
 npx: installed 407 in 5.865s
 > Generating Nuxt.js project in /Users/cstewart/Projects/personal/tutorials/cherislistening
 ? Project name cherislistening
 ? Project description A Spotify Now Playing App
 ? Use a custom server framework none
 ? Choose features to install Prettier, Axios
 ? Use a custom UI framework none
 ? Use a custom test framework none
 ? Choose rendering mode Universal
 ? Author name Cher Scarlett
 ? Choose a package manager npm

npm notice created a lockfile as package-lock.json. You should commit this file.

To get started:

npm run dev

To build and start for production:

npm run build
npm start


Every folder within the scaffolding comes with a README file. This file will give you the basics for how the folder works, and whether or not it’s needed. We will talk about the folders we’ll use as we get to them in the tutorial.

.nuxt/
assets/
|___README.md
components/
|___Logo.vue
|___README.md
layouts/
|___default.vue
|___README.md
middleware/
|___README.md
node_modules/
pages/
|___index.vue
|___README.md
plugins/
|___README.md
static/
|___favicon.ico
|___README.md
store/
|___README.md
.gitignore
.prettierrc
LICENSE
nuxt.config.js
package-lock.json
package.json
README.md

We’ll need to make a change to package.json so that when we deploy to Heroku, our build process will run. In “scripts”, we’ll add "heroku-postbuild": "npm run build". Don’t forget to add a comma after the previous line in the object.

"scripts": {
     "dev": "nuxt",
     "build": "nuxt build",
     "start": "nuxt start",
     "generate": "nuxt generate",
     "heroku-postbuild": "npm run build"
   },

package.json

If you run npm run dev, and go to http://localhost:3000 in your browser, you should see the scaffolded app running:


Initial state of Nuxt application after scaffolding (Large preview)

Install Redis

Open a new terminal or command line tab and change directories (cd) into your project’s parent folder. Download redis and run make. If you’re on Windows, you’ll need to check out https://github.com/MicrosoftArchive/redis/releases.

$ cd ../
$ wget http://download.redis.io/releases/redis-5.0.3.tar.gz
$ tar xzf redis-5.0.3.tar.gz
$ cd redis-5.0.3
$ sudo make install
cd src && /Library/Developer/CommandLineTools/usr/bin/make install

Hint: It is a good idea to run ‘make test’. 😉

INSTALL install
INSTALL install
INSTALL install
INSTALL install
INSTALL install

$ redis-server --version
Redis server v=5.0.3 sha=00000000:0 malloc=libc bits=64 build=bfca7c83d5814ae0

$ redis-server --daemonize yes

That will start our Redis server as a background process, and we can close this tab. The local Redis server will be running at redis://127.0.0.1:6379.

In our tab with our project running, type Ctrl + C to kill the server. We’ll need to install a redis package for node, and provision our Heroku Redis instance.

$ npm install async-redis --save
npm WARN eslint-config-prettier@3.6.0 requires a peer of eslint@>=3.14.1 but none is installed. You must install peer dependencies yourself.

+ async-redis@1.1.5
added 5 packages from 5 contributors and audited 14978 packages in 7.954s
found 0 vulnerabilities

$ heroku addons:create heroku-redis
Creating heroku-redis on ⬢ cherislistening... free
Your add-on should be available in a few minutes.
! WARNING: Data stored in hobby plans on Heroku Redis are not persisted.
redis-metric-84005 is being created in the background. The app will restart when complete...
Use heroku addons:info redis-metric-84005 to check creation progress
Use heroku addons:docs heroku-redis to view documentation

Because we are using a hobby account, we don’t have a backup of our data. If our instance needs to be rebooted, we’ll need to re-authenticate to get a new key. Our application will also sleep on the free account, so some initial visits will be a little slow while the app is “waking up”.

Our new app will be live at https://cherislistening.herokuapp.com/, where ‘cherislistening’ is whatever you named your Heroku application.


Default parked page for our new Heroku app (Large preview)

Sign Up For A Spotify Developer Account

This requires a Spotify account. Make note that every use of Spotify’s API must adhere to their brand guidelines.

Create a Client ID at https://developer.spotify.com/dashboard/applications.

Take the Client ID and the Client Secret, which you can find by clicking on the green card to open your new application’s details, and export them to Heroku as configuration variables. Keep these safe and secret! If you believe your client secret has been exposed, you can get a new one, but you’ll need to update your application’s configuration as well.


Never share your client ID and client secret tokens! (Large preview)
$ heroku config:set CLIENT_ID=<CLIENT_ID> CLIENT_SECRET=<CLIENT_SECRET>
Setting CLIENT_ID, CLIENT_SECRET and restarting ⬢ cherislistening... done, v3
CLIENT_ID:             <CLIENT_ID>
CLIENT_SECRET:         <CLIENT_SECRET>

On the top right side of the application dashboard, there is a Settings button. Click that and add in two callback URLs for whitelisting. You’ll need a local callback URL and one for your production server (the Heroku URL we got during setup).


Whitelisted callback URLs in Spotify’s dashboard (Large preview)

Spotify has fantastic developer documentation, including a great reference interface for testing endpoints. We’ll need to get our user ID to save to our configuration variables, so let’s do that with Get Current User’s Profile. Get an auth token from their console, selecting the user-read-private scope. Click “Try It”, and in the right-hand column look for your ID. We’ll use this identifier to make sure no one else can sign into our app.

$ heroku config:set SPOTIFY_USER_ID=<SPOTIFY_USER_ID>
 Setting SPOTIFY_USER_ID and restarting ⬢ cherislistening... done, v4
 SPOTIFY_USER_ID:             <SPOTIFY_USER_ID>

As we discussed, we will have data we wouldn’t want exposed to the public. Two of these values are the clientId and clientSecret we were given by Spotify; another is the Redis URL Heroku exported for us so we can access our Redis cache on the server. We’ll need to grab those for our local development as well.

$ heroku config
=== cherislistening Config Vars
API_URL:               /
CLIENT_URL:            http://cherislistening.herokuapp.com
HOST:                  0.0.0.0
NODE_ENV:              production
NPM_CONFIG_PRODUCTION: false
REDIS_URL: <REDIS_URL>
SPOTIFY_CLIENT_ID: <SPOTIFY_CLIENT_ID>
SPOTIFY_CLIENT_SECRET: <SPOTIFY_CLIENT_SECRET>
SPOTIFY_USER_ID: <SPOTIFY_USER_ID>

$ touch .env

We’ll transfer the credentials Heroku returned in our terminal to our new file, .env, and we’ll make our client URL our local server, http://localhost:3000/. We’ll need to make our Redis URL point to our local instance, too, which by default is redis://127.0.0.1:6379. This file will be ignored by git.

CLIENT_URL=http://localhost:3000/
REDIS_URL=redis://127.0.0.1:6379
SPOTIFY_CLIENT_ID=<SPOTIFY_CLIENT_ID>
SPOTIFY_CLIENT_SECRET=<SPOTIFY_CLIENT_SECRET>
SPOTIFY_USER_ID=<SPOTIFY_USER_ID>

.env

In order to access the configuration on our local server, we’ll need to update the Nuxt config. We’ll add another item to our modules array: @nuxtjs/dotenv. We’ll also need to expose two of the variables we need on the client side of our application. We’ll add an env object following modules.

/*
  ** Nuxt.js modules
  */
  modules: [
    // Doc: https://axios.nuxtjs.org/usage
    '@nuxtjs/axios',
    '@nuxtjs/dotenv'
  ],
  env: {
    spotifyId: process.env.SPOTIFY_CLIENT_ID,
    clientUrl: process.env.CLIENT_URL
  }

nuxt.config.js

Building Our API Layer

Middleware

Nuxt has two separate methods for executing server-side code.

In a single-file component (SFC), you have access to the middleware property, which corresponds to the middleware folder in your scaffolding. The drawback with this middleware for our use case is that while it executes server-side when your page is loaded or refreshed, it executes client-side once your app is mounted and when you navigate with Nuxt’s routes.

The other option is what we’re looking for. We’ll create our own directory and add it as serverMiddleware to our config. Nuxt creates its own express instance, so we can write middleware registered to its stack that will only run on the server. This way, we can protect our private data from exploitation. Let’s add an api folder and index.js to handle our API endpoints.

$ mkdir api
 $ touch api/index.js

Next, we’ll need to add our directory to our config so it registers when we start our server. Let’s open up the file nuxt.config.js at the root of our app. This file gives us our HTML <head>, as well as connecting anything to our client at build time. You can read more about the config in the docs.

We’ll add our api directory to our config file,

  },
   serverMiddleware: ['~/api']
 }

nuxt.config.js

While we are developing, our changes will require rebuilds and server reboots. Since we don’t want to have to do this manually, nuxt installs nodemon for us, which is a “hot reload” tool. This just means it will reboot the server and rebuild our app when we save our changes.

Since we’ve added our API as serverMiddleware to Nuxt’s stack, we’ll need to add our directory to the config. We’ll add watch to our build object, and add the relative path from the root.

  /*
  ** Build configuration
  */
  build: {
    watch: ['api'],
    /*
    ** You can extend webpack config here
    */
    extend(config, ctx) {}
  },
  serverMiddleware: ['~/api']

nuxt.config.js

We’ll also need to change our dev script in package.json to restart the server. We’ll need to make it nodemon --watch api --exec "nuxt":

"scripts": {
     "dev": "nodemon --watch api --exec \"nuxt\"",
     "build": "nuxt build",
     "start": "nuxt start",
     "generate": "nuxt generate",
     "heroku-postbuild": "npm run build"
   },

package.json

Now we don’t have to worry about rebooting and restarting our server manually every time we make a change. 🎉

Let’s start our local development server.

$ npm run dev

Data Flow, Storage, And Security

Before we start writing our API layer, we’ll want to plan how we move data from external sources to our client. We’ve set up a Redis cache server, signed up for Spotify API, and set up a structure that has a client layer and a server layer. The client has pages and a store where we can store and render our data. How do these work together to keep our authentication data safe and drive our Now Playing component?


Data flow plan in a chart (Large preview)

Any information we want to keep long-term, or for new incoming connections, we’ll want to store on the server. We can’t log into Spotify when other users visit our app, so we’ll want to ensure that new client connections can bypass authentication by accessing our special service token. We’ll want to keep track of our own Spotify login so that only our own connection is approved by the API, and we’ll want a track ready to show in case we can’t connect to Spotify’s API for some reason.

So, we’ll need to plan on storing our Spotify refresh_token, our Spotify userId, and our lastPlayedTrack in our Redis Cache.

Everything else can safely be stored in our client’s Vuex store. The store and the pages (including their components) will pass data back and forth using Nuxt’s architecture, and we’ll talk to the Redis cache and Spotify’s API via our own server’s API.
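As a small illustration of that split, the server-side values could be guarded by a tiny whitelist helper. This is a sketch; the key names below are illustrative, not part of Nuxt or Spotify’s API:

```javascript
// Keys we plan to persist in Redis (illustrative names, matching the plan above).
const STORED_KEYS = ['refresh_token', 'user_id', 'last_played_track']

// Only whitelisted keys may be read from or written to the cache;
// everything else stays in the client-side Vuex store.
function isStorableKey(key) {
  return STORED_KEYS.includes(key)
}
```

A guard like this makes it harder to accidentally leak an access token into long-term storage.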

Writing The API

Nuxt comes with the express framework already installed, so we can import it and mount our server application on it. We’ll want to export our handler and our path, so that nuxt can handle our middleware.

import express from 'express'

 const app = express()

 module.exports = {
   path: '/api/',
   handler: app
 }

api/index.js

We’ll need a few endpoints and functions to handle the services we need:

  • POST to our Redis Cache:
    • Spotify last played track (name, artists, album cover asset URL)
    • Spotify refresh_token
    • Spotify access_token
    • Status of Spotify connection
  • GET from our Redis Cache (same values as the POST)
  • Callback from Spotify
  • Refresh our Spotify access_token
  • GET recently played tracks from Spotify
  • GET currently playing track from Spotify

This may seem like a lot of calls, but we will combine and add small bits of logic where it makes sense as we write.
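As an example of that combining, the three track fields above can come out of a single response. Here is a sketch of the kind of reducer we might use; the payload shape is assumed from Spotify’s track object, so verify the field names against the API reference:

```javascript
// Sketch: reduce a Spotify currently-playing payload to the fields we store.
// The payload shape (item.name, item.artists, item.album.images) is an
// assumption based on Spotify's track object.
function extractTrack(payload) {
  const item = payload.item
  return {
    name: item.name,
    artists: item.artists.map(artist => artist.name).join(', '),
    albumArt: item.album.images.length ? item.album.images[0].url : null
  }
}
```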

The Basics Of Writing An Endpoint In Expressjs

We’ll use express’s get() method to define most of our endpoints. If we need to send complex data to our API, we can use the post() method.

But what if we could do both? We can accept multiple methods with all().

Let’s add the first route we’ll need, which is our connection to our Redis cache. We’ll name it spotify/data. The reason we’re naming it based on spotify rather than redis is that we’re handling information from Spotify, and Redis is simply a service we’re using to handle the data. spotify is more descriptive here, so we know what we’re getting, even if our storage service changes at some point.

For now, we’ll add only a res.send():

import express from 'express'

 const app = express()

 app.all('/spotify/data/:key', (req, res) => {
   res.send('Success! 🎉\n')
 })

 module.exports = {
   path: '/api/',
   handler: app
 }

api/index.js

Let’s test to make sure everything is working properly. Open a new tab in your terminal or command line (so that your Nuxt server keeps running), and run the following cURL command:

$ curl http://localhost:3000/api/spotify/data/key
 Success! 🎉

As you can see, res.send() returned the message we included in response to our GET request. This is how we will return the data we retrieve from Spotify and Redis to the client, too.

Each one of our endpoints will have the same basic structure as our first one.


The image depicts the endpoint path as spotify/data, the param as /:key, request as req, and response as res
Breakdown of an API endpoint function in Expressjs (Large preview)

It will have a path, /spotify/data/; it may have a param, like :key; and on request, Express will give us a request object, req, and a response object, res. req holds the data we send to the server, and res is what we use to respond after we complete any procedures within our function.
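
To make those pieces concrete, here is a runnable sketch that invokes a handler with mock req and res objects of our own making (no server involved, and not Express’s real internals), showing what the framework hands us for GET /spotify/data/test?value=1:

```javascript
// Hypothetical handler invoked with mock req/res objects.
const handler = (req, res) => {
  // req.params holds route params like :key; req.query holds ?value=1
  res.send({ key: req.params.key, value: req.query.value })
}

const sent = []
const mockRes = { send: payload => sent.push(payload) }

handler({ params: { key: 'test' }, query: { value: '1' } }, mockRes)
console.log(sent[0]) // { key: 'test', value: '1' }
```

Express builds params from the :key segment of the path and query from the query string; our endpoints only rely on those two properties.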

Connecting To The Redis Cache

We’ve already seen that we can return data back to our client with res.send(), but we may also want to send a res.status(). When we have an issue reaching Spotify (or our Redis cache), we’ll want to know so we can gracefully handle the error, instead of crashing our server, or crashing the client. We’ll also want to log it, so we can be informed about failures on applications we build and service.

Before we can continue with this endpoint, we’ll need access to our Redis Cache. During setup, we installed async-redis, which will help us easily access our cache from Heroku. We’ll also need to add our dotenv config so we can access our redis URL.

import redis from 'async-redis'

require('dotenv').config()

// Redis
function connectToRedis() {
  const redisClient = redis.createClient(process.env.REDIS_URL)
  redisClient.on('connect', () => {
    console.log('\n🎉 Redis client connected 🎉\n')
  })
  redisClient.on('error', err => {
    console.error(`\n🚨 Redis client could not connect: ${err} 🚨\n`)
  })
  return redisClient
}

api/index.js

By default, redis.createClient() will use host 127.0.0.1 and port 6379, but because our production redis instance is at a different host, we will grab the one we put in our config.

We should add some console commands on the connect and error listeners the redisClient provides us. It’s always good to add logging, especially during development, so if we get stuck and something isn’t working, we’ve got a lot of information to tell us what’s wrong.

We need to handle the following cases in our API layer:

  • POST to our Redis Cache
    • Spotify last played track
      • Name
      • Artists
      • Album Cover Asset URL
    • Spotify refresh_token
    • Spotify access_token
  • GET from our Redis Cache
    • Same as POST

async function callStorage(method, ...args) {
  const redisClient = connectToRedis()
  const response = await redisClient[method](...args)
  redisClient.quit()
  return response
}

api/index.js

Since we are requesting data from an external resource, we’ll want to use async/await to let our program know that this function returns a promise, and that we need to wait for it to resolve before continuing.

In our arguments, we pull out our required, known argument method, and assign the rest (...) of the parameters to the scoped const args.

We make a call to our redis client using bracket notation, allowing us to pass a variable as the method. We again use the spread operator, ... to expand our args Array into a list of arguments with the remaining items. A call to http://localhost:3000/api/spotify/data/test?value=1 would result in a call to the redis client of redisClient['set']('test', 1). Calling redisClient['set']() is exactly the same as calling redisClient.set().
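
The dispatch trick is plain JavaScript, so we can see it with a toy client of our own (fakeClient is a stand-in, not the redis API):

```javascript
// Bracket notation with a variable method name calls the same
// function as dot notation.
const fakeClient = {
  store: {},
  set(key, value) { this.store[key] = value; return 1 },
  get(key) { return this.store[key] }
}

const call = (method, ...args) => fakeClient[method](...args)

call('set', 'test', 1)           // same as fakeClient.set('test', 1)
console.log(call('get', 'test')) // 1
```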

Make a note that we must quit() to close our redis connection each time we open it.

function storageArgs(key, { expires, body, ...props } = {}) {
  // req.body defaults to an empty Object, which is truthy, so check for keys
  const hasBody = Boolean(body) && Object.keys(body).length > 0
  const value = hasBody ? JSON.stringify(body) : props.value
  return [
    Boolean(value) ? 'set' : 'get',
    key,
    value,
    Boolean(expires) ? 'EX' : null,
    expires
  ].filter(arg => Boolean(arg))
}

api/index.js

We know that we can get two types of inputs: either a JSON body or a string value. All we really need to do is check whether body exists and has keys; if it does, we’ll assume it’s JSON and stringify it. Otherwise, we’ll use props.value, which will be undefined if nothing was passed. We assign whichever we get back from the ternary statement to the const value. Make note that we are not destructuring value from the rest (...) of props, because we need to assign body to value when it exists.

The first index of the array we are returning, position 0, will be the method we call on the redis client. We’re making a Boolean check in case something other than null is passed, like undefined. If there is a value, this will return true and our method will be set. If false, get.

Index 1 and index 2 are our key and value, respectively.

Indexes 3 and 4 are used to set an expiration time on the key. This comes in handy for our access_token, which expires every few minutes to protect the integrity of our application.

As you may have suspected, we don’t want a null or undefined value in our array, so if there is no value, we’ll want to remove it. There are several ways to handle this, but the most readable is to use Array’s method filter(). This creates a new Array, removing any items that don’t match our condition. Using a Boolean() type coercion, we can check for a true or false. A null or undefined argument in our array will be removed, leaving us with an array of arguments we can trust to return back to the caller.
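
For example, the argument list a get builds filters down like this:

```javascript
// A get call builds ['get', key, undefined, null, undefined];
// filtering on Boolean(arg) drops every falsy slot.
const args = ['get', 'last_played', undefined, null, undefined].filter(
  arg => Boolean(arg)
)
console.log(args) // [ 'get', 'last_played' ]
```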

const app = express()
app.use(express.json())
// Express app
app.all(
  '/spotify/data/:key',
  async ({ params: { key }, query: { value }, body }, res) => {
    try {
      if (['refresh_token', 'access_token'].includes(key))
        throw { error: '🔒 Cannot get protected stores. 🔒' }

      const reply = await callStorage(...storageArgs(key, { value, body }))

      res.send({ [key]: reply })
    } catch (err) {
      console.error(`\n🚨 There was an error at /api/spotify/data: ${err} 🚨\n`)
      res.send(err)
    }
  }
)

api/index.js

Make note of app.use(express.json()). This gives us access to body on the request object. We’ll also be wrapping our endpoint procedures in try/catch blocks so we don’t wind up with uncaught errors. There are other ways to handle errors, but this is the simplest for our application.

Note: Check out this awesome demo of different errors by Wes Bos on Error Handling in Nodejs with async/await.

We want to make sure that this endpoint doesn’t return any of the data we’re trying to hide, so after we grab our key by destructuring the request object, we throw an error letting the client know they can’t get those stores. Make note that when we know the shape of an incoming Object in JavaScript ES6, we can use curly braces to pull out variables by the Object’s keys.

const reply = await callStorage(...storageArgs(key, { value, body }))

api/index.js

We’re calling the function named callStorage. Because we may have three or four arguments, we pass in rest parameters by spreading our args Array. In the call above, we use ... to expand the Array returned by the function storageArgs() into a list of arguments of unknown size.

      res.send({ [key]: reply })
    } catch (err) {
      console.error(`\n🚨 There was an error at /api/spotify/data: ${err} 🚨\n`)
      res.send(err)
    }
  }
)

api/index.js

Now that we have our reply from the redis client, we can send it to the client via the response Object’s method send(). If we POSTed to our cache, we’ll get a 1 back from the server if it’s a new key and 0 if we replaced an existing key. (We’ll want to make a mental note of that for later.) If there’s an error, we’ll catch it, log it, and send it to the client.

We’re ready to call the redis client and begin setting and getting our data.


The image maps a test parameter in the API endpoint to the key argument in the Redis client call function, and a query of value set to 1 mapped to the second argument of value
Mapping an endpoint to a Redis server call (Large preview)

Now let’s send a few test cURLs to our API endpoint in our command line or Terminal:

$ curl --request POST http://localhost:3000/api/spotify/data/test?value=Hello
{"test": 1}

$ curl http://localhost:3000/api/spotify/data/test
{"test": "Hello"}

$ curl --request POST \
  http://localhost:3000/api/spotify/data/bestSong \
  --header 'Content-Type: application/json' \
  --data '{
"name": "Break up with ur gf, I’m bored",
"artist": "Ariana Grande"
}'
{"bestSong": 1}

$ curl http://localhost:3000/api/spotify/data/bestSong
{"bestSong":"{\"name\":\"Break up with ur gf, I’m bored\",\"artist\":\"Ariana Grande\"}"}

Connecting With Spotify

Our remaining to-do list has shrunk considerably:

  • Callback from Spotify
  • Refresh our Spotify access_token
  • GET recently played track from Spotify
  • GET currently playing track from Spotify

A callback is a function that must be executed following the completion of a prior function. When we make calls to Spotify’s API, they will “call us back”, and if something isn’t quite right, Spotify’s server will deny us access to the data we requested.

import axios from 'axios'

api/index.js

Our callback will need to do a couple of things. First, it will capture a response from Spotify that will contain a code we need temporarily. Then, we’ll need to make another call to Spotify to get our refresh_token, which you may recognize from our redis storage planning. This token will give us a permanent connection to Spotify’s API as long as we are on the same application logged in as the same user. We’ll also need to check for our userId for a match before we do anything else, to prevent other users from changing our data to their own. Once we confirm we are the logged in user, we can save our refresh_token and access_token to our redis cache. Because we’re making API calls in our callback function, we’ll need to import axios to make requests, which nuxt installed when we scaffolded the app.

Note that browsers have a native fetch() method, but it’s very common to see axios used instead, because its syntax is more user-friendly and legible, and it works the same way on the server.

const getSpotifyToken = (props = {}) =>
  axios({
    method: 'post',
    url: 'https://accounts.spotify.com/api/token',
    params: {
      client_id: process.env.SPOTIFY_CLIENT_ID,
      client_secret: process.env.SPOTIFY_CLIENT_SECRET,
      redirect_uri: `${process.env.CLIENT_URL}/api/spotify/callback`,
      ...props
    },
    headers: {
      'Content-Type': 'application/x-www-form-urlencoded'
    }
  })

api/index.js

One of the benefits of using a function expression instead of an arrow function expression is that you get access to an inherited arguments object mapped by index, as well as a contextual this object. Since we don’t need either here, and we’re only returning the response of our axios call, we can use an arrow function and implicitly return the response.

We’ll want to write a single function for getting Spotify tokens. The majority of the code for getting our refresh_token and access_token is basically the same, so we can write an axios POST boilerplate and spread (...) a props Object. Spreading an Object expands its properties into the parent Object at the root depth, so if we spread { grant_type: 'refresh_token' }, our params Object is extended to contain the properties { client_id, client_secret, redirect_uri, grant_type }. Again, we forego an explicit return with an arrow function and opt for an implicit return, since this function only returns a single response.

Note that we set props in the arguments as an empty Object ({}) by default just in case this function gets called without an argument. This way, nothing should break.
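
Here is the merge in isolation, with placeholder values instead of real credentials:

```javascript
// Spreading props into the boilerplate params adds grant_type
// (and friends) at the root depth. Values here are placeholders.
const baseParams = {
  client_id: 'id-from-env',
  client_secret: 'secret-from-env',
  redirect_uri: 'https://example.com/api/spotify/callback'
}
const props = { grant_type: 'authorization_code', code: 'code-from-spotify' }
const params = { ...baseParams, ...props }

console.log(Object.keys(params))
// [ 'client_id', 'client_secret', 'redirect_uri', 'grant_type', 'code' ]
```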

const spotifyBaseUrl = 'https://api.spotify.com/v1/'

const getUserData = access_token =>
  axios.get(`${spotifyBaseUrl}me`, {
    headers: {
      withCredentials: true,
      Authorization: `Bearer ${access_token}`
    }
  })

api/index.js

In order to check that we are the user who logged in via Spotify, we’ll write another implicitly returned arrow function expression and call Spotify’s Get Current User’s Profile method (the one we tested earlier to get our SPOTIFY_USER_ID). We set a const here with the base API URL because we will use it again in our other calls to the library. Should this ever change in the future (like for Version 2), we’ll only have to update it in one place.

We now have all of the functions we need to write our callback endpoint. Make note of the fact that this will be a client-facing endpoint.

app.get('/spotify/callback', async ({ query: { code } }, res) => {
  try {
    const { data } = await getSpotifyToken({
      code,
      grant_type: 'authorization_code'
    })
    const { access_token, refresh_token, expires_in } = data
    const {
      data: { id }
    } = await getUserData(access_token)

    if (id !== process.env.SPOTIFY_USER_ID)
      throw { error: "🤖 You aren’t the droid we’re looking for. 🤖" }

    callStorage(...storageArgs('is_connected', { value: true }))
    callStorage(...storageArgs('refresh_token', { value: refresh_token }))
    callStorage(
      ...storageArgs('access_token', {
        value: access_token,
        expires: expires_in
      })
    )

    const success = '🎉 Welcome Back 🎉'
    res.redirect(`/auth?success=${success}`)
  } catch (err) {
    console.error(
      `\n🚨 There was an error at /api/spotify/callback: ${err} 🚨\n`
    )
    res.redirect(`/auth?error=${err.error || err}`)
  }
})
api/index.js

Our callback endpoint must exactly match the URL we added to our settings in the Spotify dashboard. We used /api/spotify/callback, so we’ll handle GET at /spotify/callback here (our Express app is already mounted at /api). This is another async function, and we need to destructure code from the request object.

We call the function we wrote earlier, getSpotifyToken(), to get our first access_token, our refresh_token, and our first expires_in. We’ll want to save all three of these to our redis cache, using the built-in key timeout of redis’ set method to expire our access_token in expires_in seconds. This sets up a system for refreshing our access_token when we need it: once the time to live (TTL) reaches zero, redis deletes the key, and a get for access_token will return null.
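
To see the expiry behavior we’re relying on, here is a toy stand-in for redis’ EX option (a plain Map of our own, not the redis API):

```javascript
// Toy model of key expiry: a key stored with a TTL reads back
// as null once the TTL has passed.
const cache = new Map()

const set = (key, value, ttlSeconds) =>
  cache.set(key, { value, expiresAt: Date.now() + ttlSeconds * 1000 })

const get = key => {
  const entry = cache.get(key)
  return entry && Date.now() < entry.expiresAt ? entry.value : null
}

set('access_token', 'abc123', 3600) // expires in an hour
console.log(get('access_token'))    // 'abc123'

set('stale_token', 'old', -1)       // TTL already elapsed
console.log(get('stale_token'))     // null
```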

Now that we have an access_token, we can make sure that the user who connected is us. We call getUserData(), the function we wrote earlier, and destructure the ID to compare to the user ID we saved to our environment configuration. If it’s not a match, we’ll throw an error message.

After we’re sure that our refresh_token is trusted, we can save our tokens to our redis cache. We call callStorage again — once for each token.

Make note that redis does have methods for setting multiple keys, but because we want to expire our access_token, we need to use set().

Since this is a client-facing endpoint, we’ll redirect to a URL and append a success or error message for the client to interpret. We’ll set this path up later on the client side.

We’ll need to retrieve our access_token and refresh it if necessary before we call any other Spotify endpoints. Let’s write an async function to handle that.

async function getAccessToken() {
  const redisClient = connectToRedis()
  const accessTokenObj = { value: await redisClient.get('access_token') }
  if (!Boolean(accessTokenObj.value)) {
    const refresh_token = await redisClient.get('refresh_token')
    const {
      data: { access_token, expires_in }
    } = await getSpotifyToken({
      refresh_token,
      grant_type: 'refresh_token'
    })
    Object.assign(accessTokenObj, {
      value: access_token,
      expires: expires_in
    })
    callStorage(...storageArgs('access_token', { ...accessTokenObj }))
  }
  redisClient.quit()
  return accessTokenObj.value
}

api/index.js

We assign a const accessTokenObj to an Object with the value of our redis get('access_token'). If the value is null, we’ll know it’s expired and we need to refresh it. After getting our refresh_token from our cache, and getting a new access_token, we’ll assign our new values to accessTokenObj, set() them in redis, and return the access_token.

Let’s write our endpoint for getting the currently-playing track. Since we’ll only want recently-played if there’s nothing currently playing, we can write a function for our endpoint to call that handles getting that data when it’s needed.

app.get('/spotify/now-playing/', async (req, res) => {
  try {
    const access_token = await getAccessToken()
    const response = await axios.get(
      `${spotifyBaseUrl}me/player/currently-playing?market=US`,
      {
        headers: {
          withCredentials: true,
          Authorization: `Bearer ${access_token}`
        }
      }
    )
    const { data } = response
    await setLastPlayed(access_token, data)
    const reply = await callStorage('get', 'last_played')
    res.send({
      item: JSON.parse(reply),
      is_playing: Boolean(data.is_playing),
      progress_ms: data.progress_ms || 0
    })
  } catch (err) {
    res.send({ error: err.message })
  }
})

async function setLastPlayed(access_token, item) {
  if (!Boolean(item)) {
    const { data } = await axios.get(
      `${spotifyBaseUrl}me/player/recently-played?market=US`,
      {
        headers: {
          withCredentials: true,
          Authorization: `Bearer ${access_token}`
        }
      }
    )
    postStoredTrack(data.items[0].track)
  } else {
    postStoredTrack(item)
  }
}


function postStoredTrack(props) {
  callStorage(
    ...storageArgs({
      key: 'last_played',
      body: props
    })
  )
}

api/index.js

The endpoint calls Spotify’s Get the User’s Currently Playing Track endpoint, and the async function setLastPlayed() calls Get Current User’s Recently Played Tracks if nothing is returned from currently-playing. We call our last function, postStoredTrack(), with whichever one we have, and retrieve it from our cache to send to the client. Note that we cannot omit the else clause, because we aren’t returning anything in the if clause.

Vuex: Client-Side Storage And State Management

Now that we have middleware to connect to our services by proxy, we can connect those services to our client-side application. We’ll want our users to have automatic updates when we change songs, pause, rewind, or fast-forward, and we can handle those changes with state management.

State is our application’s way of holding onto information in real-time. It is how our application remembers the data it uses, and any changes to that data. State is really a short way of saying “the state of the system’s data”. The state of a Vue application is held in a user’s browser session, and with certain patterns, we can trigger various events to mutate that state. When the state changes, our application can update without requiring storage or server calls.

The pattern we’ll use is called a store pattern. This gives us a single source of truth as a user moves about our application (even though we’ll only have two pages for this particular app).

Vue’s component lifecycle adds the necessary one-way bindings we need, and Nuxt comes with Vuex, which does all of the heavy lifting when our data changes. We will want our state to be constantly updating, but we won’t want to call our API every few milliseconds to keep a progress bar moving. Instead of constantly polling our API and hitting Spotify’s rate limit, we can lean on Vuex mutations to continuously update the state of our bindings.
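
As a hypothetical sketch of that idea (tick() and its names are ours, not part of Vuex), the client can advance the track progress locally between polls:

```javascript
// Between API polls, advance progress locally so the bar keeps
// moving without extra server calls.
const track = { progress_ms: 30000, duration_ms: 200000 }

function tick(state, intervalMs = 1000) {
  // never run past the end of the song
  const progress = Math.min(state.progress_ms + intervalMs, state.duration_ms)
  return { ...state, progress_ms: progress }
}

console.log(tick(track).progress_ms) // 31000
```

A component could call something like this from a setInterval and dispatch updateProgress with the result, only re-polling the API when a track should have ended.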

The data we’ll be dealing with will only be bound one-way. This means that our component and page views can get the data in store, but in order to mutate that data, they will need to call an action in the store.


A model drawn to show how data flows one way in our app - in the center is our Vuex store, which sends data our view, the pages and components - the view calls actions to mutate the model, which in turn update the Vuex store
One way data flow in our application (Large preview)

As you can see, the data only moves one way. When our application starts, we’ll instantiate our models with some default data, then we will hydrate the state in a middleware function expression built into Nuxt’s implementation of Vuex called nuxtServerInit(). After the application is running, we will periodically rehydrate the store by dispatching actions in our pages and components.

Here’s the basic structure we’ll need to activate a store in store/index.js:

// instantiated defaults on state
export const state = () => ({
  property: null
})

// we don’t edit the properties directly, we call a mutation method
export const mutations = {
  mutateTheProperty(state, newProperty) {
    // we can perform logical state changes to the property here
    state.property = newProperty
  }
}

// we can dispatch actions to edit a property and return its new state
export const actions = {
  updateProperty: ({ commit, state }, newProperty) => {
    commit('mutateTheProperty', newProperty)
    return state.property // will equal newProperty and trigger subscribers to re-evaluate
  }
}

Once you feel comfortable, you can set up more shallow modular stores, which Nuxt implements based on your file structure in store/. We’ll use only the index module.

$ touch store/index.js

export const state = () => ({
  isConnected: false,
  message: null,
  nowPlaying: {},
  recentlyPlayed: {},
  trackProgress: 0,
  isPlaying: false
})

store/index.js

We’re going to need a few models to instantiate the state when our app starts. Note that this must be a function that returns an Object.

  • isConnected: tells us if we’re already connected via Spotify.
  • message: tells us if there’s an error during authentication (we set these up in the API on our callback endpoint).
  • nowPlaying: the song (track) Object that is currently or recently playing.
  • recentlyPlayed: the track most recently played.
  • trackProgress: the amount of the track that has already played (a percentage).
  • isPlaying: if the nowPlaying track is currently being played.

To update these, we’ll need to add mutations for each model. You can mutate more than one model in a mutation function, but to keep things digestible, we’re going to write a flat mutations object.

export const mutations = {
  connectionChange(state, isConnected) {
    state.isConnected = isConnected
  },
  messageChange(state, message) {
    state.message = message
  },
  nowPlayingChange(state, nowPlaying) {
    state.nowPlaying = nowPlaying
  },
  isPlayingChange(state, isPlaying) {
    state.isPlaying = isPlaying
  },
  progressChange(state, { progress, duration }) {
    state.trackProgress = (progress / duration) * 100
  },
  recentlyPlayedChange(state, recentlyPlayed) {
    state.recentlyPlayed = recentlyPlayed
  }
}

store/index.js

We’re not doing much in the way of data massaging for this app, but for progress we’ll need to calculate the percentage ourselves. We’ll return a number from 0 to 100.
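
The arithmetic progressChange performs, in isolation:

```javascript
// 50 seconds into a 200-second track is 25% complete.
const progress = 50000  // ms played
const duration = 200000 // track length in ms
const trackProgress = (progress / duration) * 100

console.log(trackProgress) // 25
```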

import axios from 'axios'

// Mirrors the CLIENT_URL value from our environment config
const clientUrl = process.env.CLIENT_URL

export const actions = {
  async nuxtServerInit({ commit }) {
    try {
      const redisUrl = `${clientUrl}/api/spotify/data/`
      const {
        data: { is_connected }
      } = await axios.get(`${redisUrl}is_connected`)

      commit('connectionChange', is_connected)

      if (Boolean(is_connected)) {
        const {
          data: { item, is_playing }
        } = await axios.get(`${clientUrl}/api/spotify/now-playing`)

        commit('nowPlayingChange', item)
        commit('isPlayingChange', is_playing)
      }
    } catch (err) {
      console.error(err)
    }
  },
  updateProgress: ({ commit, state }, props) => {
    commit('progressChange', props)
    return state.trackProgress
  },
  updateTrack: ({ commit, state }, nowPlaying) => {
    commit('nowPlayingChange', nowPlaying)
    return state.nowPlaying
  },
  updateStatus: ({ commit, state }, isPlaying) => {
    commit('isPlayingChange', isPlaying)
    return state.isPlaying
  },
  updateConnection: ({ commit, state }, isConnected) => {
    commit('connectionChange', isConnected)
    return state.isConnected
  }
}

store/index.js

nuxtServerInit() will be run when our server starts automatically, and will check if we are connected to Spotify already with a query to our redis data endpoint. If it finds that the redis cache key of is_connected is true, it will call our “now-playing” end point to hydrate nowPlaying with live data from Spotify, or whatever is already in the cache.

Our other actions destructure commit() and state from the store object, commit() the new data via our mutations, and return the new state to the caller.

Building The Pages And Components

Now that we have our API setup to give us data from Spotify and our store, we’re ready to build our pages and components. While we’re only going to make a couple of small pieces in this tutorial for brevity, I encourage liberal creativity.

We’ll need to remove the initial pages that the Nuxt scaffolding added, and then we’ll add our components and pages.

$ rm pages/index.vue components/Logo.vue layouts/default.vue
$ touch pages/index.vue components/NowPlaying.vue components/Progress.vue

The basic structure of every layout, page, and component in a single file component is the same. In fact, every layout, page, and component in Nuxt is a Vue component.

You can read further usage outside of the scope of this tutorial on Vue’s component registration documentation. We’re just going to do everything in the file and use plain HTML and CSS.

The repository for the demo will contain some components and styles that are not in this tutorial in order to keep things a little less complex.

<template>
  <!-- Write plain HTML here; avoid putting logic in the template -->
</template>

<script>
// Write plain JavaScript here; you can import libraries, too
export default {
  key: 'value'
}
</script>

<style>
/* Write plain global CSS here */
div {
  display: inline;
}
</style>

Layout

We need to start with the default layout; this is the root of the application, where Vue will be mounted. The layout is a type of view, which every page extends. This means that the HTML found in the layout is the basis of the HTML in every page we create.

<template>
  <div>
    <nuxt/>
    <nuxt-link
      to="/auth"
      name="auth"
      :aria-current="ariaCurrent"
    >Login</nuxt-link>
  </div>
</template>

layouts/default.vue

In the template tag, we need a single root container, and <nuxt/> is where our application will mount.

Note: In the demo code, I’ve added a <Header/> and a <Footer/>, and the footer is a functional component because all of the data is static.

In this tutorial, I’ve added a <nuxt-link/> pointed to /auth. <nuxt-link> creates navigational links for routes within your app. I’ve added a conditional aria-current attribute to nuxt-link. By adding a colon (:) in front of the attribute, I’ve indicated to Vue that the value of the attribute is bound to some data, turning the value into JavaScript that will be interpreted as a string during the component lifecycle, depending on the condition of the expression. In a computed ternary statement, if the user is on the route named auth, it will set the aria-current attribute to “page”, giving screen readers context as to whether the user is on the path the link points to. For more information on Vue’s data-binding, read this documentation.


<script>
export default {
  titleShort: 'is Listening',
  authorName: 'Cher',
  computed: {
    ariaCurrent() {
      return 'auth' === this.$route.name ? 'page' : false
    }
  },
  head() {
    return {
      title: `${this.$options.authorName} ${
        this.$options.titleShort
      } ·X· A Musical App`,
      link: [
        {
          rel: 'stylesheet',
          href: 'https://fonts.googleapis.com/css?family=Bungee+Hairline|Oswald'
        }
      ]
    }
  }
}
</script>

layouts/default.vue

The script tag can be thought of as a single JavaScript module. You can import other modules, and you export an Object of properties and methods. Above, we set two custom properties: titleShort and authorName. These will be mounted onto this.$options, and down the component tree you can access them through $nuxt.layout. This is useful for information you use at the root level and in deeply nested children, like for updating the document title, or using our authorName in other contexts.


There are several functions and properties that Vue will look for and use, like head() and computed in the above example.

head() will modify the <head> of the HTML document. Here I’ll update the document title, and add a link.

Computed properties are for reactive data that needs to be evaluated. Whenever the data they depend on changes, it triggers a re-evaluation and a subsequent re-render of the node they are bound to.

<style>
    :root {
      --colorGray: #333642;
      --colorBlue: rgba(118, 120, 224, 0.5);
      --colorBrightBlue: rgb(0, 112, 255);
    }

    html {
      background: #000000;
    }

    body {
      padding: 0;
      margin: 0;
      color: white;
      font-family: 'Bungee Hairline', monospace;
    }

    a {
      color: white;
      text-decoration: none;
      display: inline-block;
      position: relative;
    }

    a:after,
    a:before {
      content: '';
      position: absolute;
      left: 0;
      right: 0;
      height: 1em;
      z-index: -1;
      mix-blend-mode: color-burn;
    }

    a:after {
      bottom: 2px;
      background: var(--colorBlue);
      z-index: -1;
      transform: rotate(-3deg);
    }

    a:before {
      background: rgba(118, 120, 224, 0.4);
      transform: rotate(2deg);
    }

    .nuxt-progress {
      opacity: 0.3;
      height: 2px;
      bottom: 0;
      top: auto;
    }
</style>

layouts/default.vue

In the CSS, you’ll notice I’m using a non-standard font, but no @import declaration. Since these are rendered on the server, they won’t be able to reach an external resource that isn’t in the build. We can still attach external resources — we just need to do it in a different way. There are workarounds that exist for this, but we just added it to our head(). You can also add it to nuxt.config.js.

The :root selector allows us to set global CSS variables we can use throughout the application. The .nuxt-progress selector is for the progress bar that Nuxt adds automatically during build; we can style it here. I’ve moved it to the bottom of the app and made it transparent and small.

Authentication Page

Now that we have a default layout, we can work on our authentication page. Pages are another kind of view in Nuxt, which render the HTML, CSS, and JavaScript that is needed for specific routes.

Pages and routes are automatically handled for every Vue file inside of the pages directory. You can also add more complex routing.

Everything has led us to this moment! Finally, we get to render some of our API-retrieved data!

<template>
  <transition name="fade" mode="in-out">
    <section>
      <nuxt-link
        to="/"
        name="index"
      >Close</nuxt-link>
      {{ message }}
    </section>
  </transition>
</template>

pages/auth.vue

<transition> is used to add transitions between pages and components mounting and unmounting. This will add conditional class names related to the name, and the mode “in-out” will make our transition happen both on entry and exit. For further usage, check out the documentation.

We get at data in the <template> with double curly braces {{}}. this is implied, so we don’t need to include it in the <template>.


<script>
export default {
  asyncData({ env: { spotifyId, clientUrl }, query }) {
    const spotifyUrl = `https://accounts.spotify.com/authorize?client_id=${
      spotifyId
    }&response_type=code&scope=user-read-currently-playing,user-read-recently-played&redirect_uri=${
      clientUrl
    }/api/spotify/callback`
    return {
      spotifyUrl,
      query
    }
  },
  computed: {
    isConnected() {
      return this.$store.state.isConnected
    },
    message() {
      return this.$store.state.message
    }
  },
  mounted() {
    const { success, error } = this.query
    if (!Boolean(success || error) && !Boolean(this.isConnected)) {
      window.location = this.spotifyUrl
    } else if (Boolean(Object.keys(this.query).length !== 0)) {
      window.history.replaceState({}, document.title, window.location.pathname)
      this.$store.commit('messageChange', success || error)
      if (Boolean(success)) {
        this.$store.dispatch('updateConnection', true)
      }
    }
    if (Boolean(this.isConnected)) {
      this.$store.commit('messageChange', '⚡ We’re Connected ⚡')
    }
  }
}
</script>

pages/auth.vue

The first thing we need to do is redirect to the authentication server, which will call us back at our callback API proxy, which we set up to redirect us back to /auth, the file we’re in now. To build the URL, we’ll need to get the environment variables we attached to the context object under the env parameter. This can only be done in pages. To access the context object, we’ll need to add the asyncData() method to our Object.

This function will be run before initializing the component, so make note that you do not have access to a component’s lexical this (which is always in the context of the local $nuxt Object) in this method because it does not exist yet. If there is async data required in a component, you will have to pass it down through props from the parent. There are many keys available in context, but we’ll only need env and query. We’ll return spotifyUrl and query, and they will be automatically merged with the rest of the page’s data.
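A minimal sketch (not the article’s code; the values and the localCounter property are made up) of what Nuxt does with asyncData(): it calls the method with the context object before the component exists, then merges the returned object into the page’s data.

```javascript
// Hypothetical page object mimicking the shape of pages/auth.vue.
const page = {
  asyncData({ env: { spotifyId, clientUrl }, query }) {
    const spotifyUrl = `https://accounts.spotify.com/authorize?client_id=${spotifyId}&redirect_uri=${clientUrl}/api/spotify/callback`
    return { spotifyUrl, query }
  },
  data() {
    return { localCounter: 0 }
  }
}

// Conceptually, on the server Nuxt does something like this:
const context = {
  env: { spotifyId: 'abc123', clientUrl: 'http://localhost:3000' },
  query: { success: 'Successfully connected!' }
}
const viewData = { ...page.data(), ...page.asyncData(context) }

console.log(viewData.localCounter)  // 0
console.log(viewData.query.success) // 'Successfully connected!'
```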

There are many other lifecycle methods and properties to hook onto, but we’ll only need mounted(), computed, data(), props, components, methods, and beforeDestroy(). mounted() ensures we have access to the window Object.

In mounted(), we can add our logic to redirect the user (well, us) to login via Spotify. Because our login page is shared with our authentication status page, we’ll check for the message Object we sent back from our callback redirect. If it exists, we will bypass redirecting so we don’t end up in an infinite loop. We’ll also check to see if we’re connected. We can set window.location to our spotifyUrl and it will redirect to the login. After logging in, and grabbing the query Object, we can remove it from our URL so our users don’t see it with window.history.replaceState({}, document.title, window.location.pathname). Let’s commit and dispatch the changes to our state in message and isConnected.

In computed(), we can return our properties from the store and they will be automatically updated on the view when they change.

Note that all properties and methods will have access to the lexical this once the component has been initialized.

<style scoped>
    section {
      position: absolute;
      width: 30%;
      min-width: 300px;
      left: 0;
      right: 0;
      bottom: 50%;
      margin: auto;
      padding: 1em;
      display: flex;
      flex-direction: column;
      text-align: center;
      justify-content: center;
      mix-blend-mode: hard-light;
      z-index: 2;
    }

    section:after,
    section:before {
      content: '';
      display: block;
      position: absolute;
      top: 0;
      bottom: 0;
      right: 0;
      left: 0;
      z-index: -1;
    }
    section:after {
      transform: rotate(1deg);
      background: rgba(255, 255, 255, 0.1);
    }
    section:before {
      transform: rotate(3deg);
      background: rgba(255, 255, 255, 0.03);
    }

    .fade-enter-active,
    .fade-leave-active {
      transition: opacity 600ms ease-out;
    }

    .fade-enter,
    .fade-leave-active {
      opacity: 0;
    }
</style>

pages/auth.vue

Note the scoped attribute added to <style>. This allows us to write shallow selectors that will only affect elements in the scope of this page (or component) by adding unique data attributes to the DOM. For more information read the documentation.
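As an illustration (the data-v hash below is made up; Vue generates one per component), a scoped rule compiles roughly like this:

```css
/* Authored in the component: */
section { mix-blend-mode: hard-light; }

/* Approximately what the browser receives; the same data-v attribute
   is stamped onto the component's rendered elements: */
section[data-v-1a2b3c] { mix-blend-mode: hard-light; }
```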

All the selectors starting with fade- are the classes created for our <transition>.

Head to http://localhost:3000/auth. If everything’s working, we should be able to login with Spotify by clicking the “Login” button, and be redirected back to see this:


A screenshot of our authentication page after we’ve been logged in. Shows our status as connected in the center of the screen
⚡ We’re connected! ⚡ (Large preview)

Let’s set up our root page.

Landing Page

This is the fun part! We’ll be creating the view that users will see when they get to our app, commonly referred to as the root or index. This is just a concise way of indicating it is the home file of its directory, and in our case, the entire application.

We’ll be adding our player directly to this page.

<template>
  <section>
    <NowPlaying v-if="isConnected && track" :nowPlaying="track" :isPlaying="isPlaying"/>
    <p v-if="!isConnected">
      😭 {{ $nuxt.layout.authorName }} hasn’t connected yet. 😭
    </p>
  </section>
</template>


import NowPlaying from '~/components/NowPlaying.vue'

export default {
  components: { NowPlaying },
  computed: {
    nowPlaying() {
      if (Boolean(Object.keys(this.$store.state.nowPlaying).length !== 0)) {
        this.$store.dispatch('updateConnection', true)
        return this.$store.state.nowPlaying
      }
      return this.$store.state.recentlyPlayed
    },
    track() {
      return this.nowPlaying
    },
    isPlaying() {
      return this.$store.state.isPlaying
    },
    isConnected() {
      return this.$store.state.isConnected
    }
  }
}


<style scoped>
section {
  min-width: 300px;
  max-width: 750px;
  margin: auto;
  padding: 1em;
}
</style>

pages/index.vue

We’ll need to import our NowPlaying component (we will write it next), and we’ll want to conditionally load it with a v-if binding based on whether or not we are connected and we have track data to show. Our computed nowPlaying() method will return the nowPlaying Object if it has properties (we instantiated an empty object in the store, so it will always exist), and we’ll dispatch an action that we’re connected. We’re passing the nowPlaying and isPlaying props since they are required to show the component.

We’ll need to create our components next, otherwise this page won’t build.

Components

In Nuxt, components are partial views. They cannot be rendered on their own; instead, they encapsulate parts of a layout or page view that should be abstracted. It’s important to note that certain methods page views have access to, like asyncData(), will never be called in a component view. Only pages have access to a server-side call while the application is starting.

Knowing when to split a chunk of a layout, page, or even component view can be difficult, but my general rule of thumb is first by the length of the file, and second by complexity. If it becomes cumbersome to understand what is going on in a certain view, it’s time to start abstracting.

We’ll split our landing page in three parts, based on complexity:

  • Index component: The page we just wrote.
  • NowPlaying component: The container and track information.
  • Progress component: The animated track progress indicator.
Now Playing
<template>
  <transition name="fade">
    <section>
      <aside>
        <img v-if="image" :src="image" alt="Album Artwork">
        <Progress :class="className" :progressPercent="progress" :image="image"/>
      </aside>
      
    </section>
  </transition>
</template>

components/NowPlaying.vue

It’s important we include a link to Spotify, as it is a part of the requirements to use their API free of charge. We’re going to pass the progressPercent and image props to our <Progress> component.



import Progress from './Progress.vue'

export default {
  components: { Progress },
  props: ['isPlaying', 'nowPlaying'],
  data() {
    return { staleTimer: '', trackTimer: '' }
  },
  computed: {
    image() {
      const { album, image } = this.nowPlaying
      if (Boolean(album)) {
        const { url } = album.images[0]
        return url
      }
      return Boolean(image)
        ? image
        : 'https://developer.spotify.com/assets/branding-guidelines/icon2@2x.png'
    },
    progress() {
      return this.$store.state.trackProgress
    },
    artistsList() {
      const { artists } = this.nowPlaying
      return artists ? artists.map(artist => artist.name).join(', ') : null
    },
    href() {
      const { external_urls } = this.nowPlaying
      return external_urls ? external_urls.spotify : null
    },
    name() {
      return this.nowPlaying.name
    },
    status() {
      return this.isPlaying
        ? `is playing this track with ${Math.round(
            this.$store.state.trackProgress
          )}% complete`
        : 'has paused this track'
    }
  },
  created() {
    this.getNowPlaying()
    this.staleTimer = setInterval(() => {
      this.getNowPlaying()
    }, 10000)
  },
  methods: {
    updateProgress(progress = 0, duration = 0) {
      this.$store.dispatch('updateProgress', { progress, duration })
    },
    async getNowPlaying() {
      const { progress_ms, is_playing, item } = await this.$axios.$get(
        `/api/spotify/now-playing/`
      )
      if (Boolean(item)) {
        const progress = progress_ms
        const duration = item.duration_ms
        this.$store.dispatch('updateStatus', is_playing)
        clearInterval(this.trackTimer)
        if (is_playing) {
          this.timeTrack(Date.now(), duration, progress)
        } else {
          this.updateProgress(progress, duration)
        }
        let id = null
        if (Boolean(this.nowPlaying)) id = this.nowPlaying.id
        if (item && (is_playing && item.id !== id)) {
          this.$store.dispatch('updateTrack', item)
        }
      }
    },
    timeTrack(now, duration, progress) {
      const remainder = duration - progress
      const until = now + remainder
      this.trackTimer = setInterval(() => {
        const newNow = Date.now()
        if (newNow < until + 2500) {
          const newRemainder = until - newNow
          const newProgressMs = duration - newRemainder
          this.updateProgress(newProgressMs, duration)
        } else {
          this.updateProgress(1, 1)
          clearInterval(this.trackTimer)
          this.getNowPlaying()
        }
      }, 100)
    }
  },
  beforeDestroy() {
    clearInterval(this.staleTimer)
    clearInterval(this.trackTimer)
  }
}

components/NowPlaying.vue

In addition to our computed() data, we will also have another type of reactive data on the data property. This property returns an Object with reactive properties, but these do not need to be re-evaluated. We will be using them for our timing intervals, so the updates will come from setInterval().

created() runs when our component is done being initialized, so we’ll call our function getNowPlaying(), and start one of our two interval timers, staleTimer, which will run getNowPlaying() once every 10 seconds. You can make this shorter or longer, but keep in mind that Spotify does have rate limiting, so it shouldn’t be any less than a few seconds to avoid getting undesired API failures.

It’s important we add beforeDestroy() and clear our running intervals as a best practice.

In the methods property, we’ll have three functions: getNowPlaying(), updateProgress(), and timeTrack(). updateProgress() will dispatch progress updates to the store, while getNowPlaying() and timeTrack() will do the heavy lifting of keeping our track object hydrated and the progress bar moving every 10th of a second so we have a constantly moving progress bar.

Let’s take a closer look at getNowPlaying():

async getNowPlaying() {
  const { progress_ms, is_playing, item } = await this.$axios.$get(
    `/api/spotify/now-playing/`
  )
  if (Boolean(item)) {
    const progress = progress_ms
    const duration = item.duration_ms
    this.$store.dispatch('updateStatus', is_playing)
    clearInterval(this.trackTimer)
    if (is_playing) {
      this.timeTrack(Date.now(), duration, progress)
    } else {
      this.updateProgress(progress, duration)
    }
    const { id } = this.nowPlaying
    if (item.id !== id) {
      this.$store.dispatch('updateTrack', item)
    }
  }
}

components/NowPlaying.vue

This is an async function because we’re calling our now-playing endpoint, and we want the function to wait until it has an answer before continuing. If the item is not null or undefined, we’ll dispatch an update to the status and clearInterval() of our trackTimer (which may not be running, but that’s OK). If is_playing is true, we’ll call timeTrack(); if it’s false, we’ll call updateProgress(). Last, we’ll check if our updated track is different from the one in our store. If it is, we’ll dispatch an update to the track in the store to rehydrate our data.

timeTrack(now, duration, progress) {
  const remainder = duration - progress
  const until = now + remainder
  this.trackTimer = setInterval(() => {
    const newNow = Date.now()
    if (newNow < until + 2500) {
      const newRemainder = until - newNow
      const newProgressMs = duration - newRemainder
      this.updateProgress(newProgressMs, duration)
    } else {
      this.updateProgress(1, 1)
      clearInterval(this.trackTimer)
      this.getNowPlaying()
    }
  }, 100)
}

components/NowPlaying.vue

This function takes a current time, duration, and progress in milliseconds and starts running an interval every 100 milliseconds to update the progress. until is the time calculated when the track will be finished playing if it is not paused or scrubbed forwards or backwards. When the interval starts, we grab the current time in milliseconds with JavaScript’s Date Object’s now() method. We’ll compare the current time to see if it is less than until plus a buffer of 2500 milliseconds. The buffer is to allow for Spotify to update the data between tracks.
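To make the arithmetic concrete, here is the same calculation in isolation with hypothetical numbers (a 200-second track that Spotify reports as 50 seconds played):

```javascript
// Hypothetical values for one run of the timing math.
const duration = 200000   // track length in ms
const progress = 50000    // progress reported by the API, in ms
const now = 1700000000000 // Date.now() when the interval starts

const remainder = duration - progress // 150000 ms left to play
const until = now + remainder         // timestamp when the track should end

// 30 seconds later, inside one tick of the interval:
const newNow = now + 30000
const newRemainder = until - newNow           // 120000 ms still to play
const newProgressMs = duration - newRemainder // 80000 ms played

console.log(newProgressMs / duration) // 0.4, i.e. the bar sits at 40%
```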

If we determine the track is theoretically still playing, we’ll calculate a new progress in milliseconds and call the updateProgress() function. If we determine the track is complete, we’ll update the progress to 100%, clearInterval() and call getNowPlaying() to get the next track.

<style scoped>
    section {
      position: relative;
      display: grid;
      grid-template-columns: 42% 58%;
      align-items: center;
      justify-content: center;
    }
    aside {
      position: relative;
      min-width: 50px;
    }

    img {
      opacity: 0;
      position: absolute;
      height: 0;
      width: 0;
    }

    section:after,
    section:before {
      content: '';
      display: block;
      position: absolute;
      top: 0;
      bottom: 0;
      right: 0;
      left: 0;
      z-index: 0;
    }

    section:after {
      transform: rotate(1deg);
      background: rgba(255, 255, 255, 0.1);
    }

    section:before {
      transform: rotate(3deg);
      background: rgba(255, 255, 255, 0.03);
    }
    .metadata {
      padding-left: 1.4em;
      position: relative;
      z-index: 2;
    }
    h2 {
      font-family: 'Oswald', monospace;
      margin: 0;
      font-size: 3em;
    }
    p {
      margin: 0;
      text-shadow: 1px 1px 1px rgba(0, 0, 0, 0.8);
    }

    .fade-enter-active {
      transition: opacity 600ms ease-out;
    }
    .fade-leave-active {
      opacity: 0;
    }
    .fade-enter,
    .fade-leave-to {
      opacity: 0;
    }

    .status span {
      opacity: 0.7;
      font-size: 0.8em;
      padding: 1em 0;
      display: block;
      white-space: nowrap;
    }
    .is-playing span {
      opacity: 0;
      transition: opacity 600ms ease-out;
    }

    @media (max-width: 600px) {
      section {
        grid-template-rows: 42% 58%;
        grid-template-columns: 100%;
      }
      aside {
        max-width: 160px;
        margin: 0 auto;
      }
      .metadata {
        text-align: center;
        padding: 0;
      }
    }
</style>

components/NowPlaying.vue

section is a grid container that keeps the album art and song metadata in two columns; on viewports up to 600px wide, the layout switches to two rows.

Progress

Now let’s build our Progress component. A simple solution is a bar that animates the width of a <div>, but I wanted to do something a bit different, so I’ve built a square out in SVG:

<template>
  
</template>

export default {
  props: ['progressPercent', 'image']
}

<style scoped>
div {
  filter: grayscale(0);
  transform: rotate(-2deg) scale(0.9);
}

.is-paused {
  filter: grayscale(80%);
  transition: all 600ms ease-out;
}

svg {
  height: 100%;
  width: 100%;
}

svg.album {
  filter: drop-shadow(0 0 10px rgba(0, 0, 0, 0.3));
}

svg.progress {
  position: absolute;
  top: 0;
  left: 0;
  right: 0;
  bottom: 0;
  filter: drop-shadow(0 0 1px rgba(255, 255, 255, 0.2))
    drop-shadow(0 0 2px var(--colorBrightBlue))
    drop-shadow(0 0 3px var(--colorBrightBlue))
    drop-shadow(0 0 5px var(--colorBrightBlue))
    opacity(65%) contrast(150%);
}

.bar {
  stroke: url(#gradient);
  stroke-width: 0.03em;
  transform: rotate(0deg);
  transform-origin: center;
  animation: fill 2s reverse;
}

.image {
  fill: url(#image);
}

@keyframes fill {
  to {
    stroke-dasharray: 0 100;
  }
}
</style>

components/Progress.vue

Above, we create two rect SVGs. One has a pattern fill of our image; the other is the progress bar. It’s important that whatever shape you use has a total perimeter of 100. This allows us to use stroke-dasharray to fill the space based on a percentage: the left value is the length of the stroke, and the right value is the space between strokes. As the stroke grows, it pushes the gap out of the frame until it eventually covers the entire perimeter. We added an animation that fills the progress bar from 0 to its current point when the component is rendered.
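As a sketch of why the perimeter of 100 matters (the helper below is mine, not part of the component): with a shape whose perimeter is exactly 100 units, such as a 25×25 square, stroke-dasharray can take the progress percentage directly as the dash length.

```javascript
// Hypothetical helper: map a progress percentage onto stroke-dasharray
// for a shape whose total perimeter is exactly 100 units
// (e.g. a 25×25 square: 4 × 25 = 100).
function dashArrayFor(percent) {
  const dash = percent      // length of the visible stroke
  const gap = 100 - percent // the rest of the perimeter stays empty
  return `${dash} ${gap}`
}

console.log(dashArrayFor(40))  // '40 60': 40% of the outline is drawn
console.log(dashArrayFor(100)) // '100 0': the full perimeter is drawn
```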

Head to localhost:3000 and if we did everything right (and you’re playing a song) we should see something like this:


An image depicting the final product of the tutorial - contains the information for the song “Intro” by The XX with a link to the song on Spotify, and a photo of the album cover
Spotify player component finished (Large preview)

Awesome! 🙌

Publishing Our Application

Let’s get everything up into our repository!

$ git add .
$ git commit . -m 'Adds Nuxt application 🎉'
[master b63fb2d] Adds Nuxt application 🎉
$ git push

If you look into your Heroku dashboard and look at the activity feed on the right-hand panel, there should be a build and a deployment:


A screenshot of the Heroku dashboard activity feed showing the build triggered by our auto-deployment - the commit ID from Github from the most recent merge is shown, along with links to the code diff and the build log
Deployment success! (Large preview)

If everything looks good, open your site!

$ heroku open

Log in with Spotify on production and start sharing your jam sessions!

🎉

Conclusion

Phew! We built a universal, server-side rendered application, wrote an API proxy on our server, connected to a Redis cache, and hosted our application on Heroku. That’s pretty awesome!

Now that we know how to build an application using Nuxt, and have an understanding of what kind of data we should handle securely on the server, the possibilities for interesting applications are endless!

Build On Your Knowledge

Spotify’s API has a medley of endpoints to add more interesting experiences to the application we built, or for composing entirely new ones! You can fork my repository to explore some other components I’ve coded, or read through the docs and apply what you’ve learned to share more musical ideas!

Further Reading on SmashingMag:

Smashing Editorial
(rb, ra, il)

Source: Smashing Magazine, Creating A Spotify-Powered App Using Nuxt.js

Designing An Aspect Ratio Unit For CSS

dreamt up by webguru in Uncategorized | Comments Off on Designing An Aspect Ratio Unit For CSS

Designing An Aspect Ratio Unit For CSS

Designing An Aspect Ratio Unit For CSS

Rachel Andrew



One of the things that comes up again and again in CSS is the fact that there is no way to size things based on their aspect ratio. In particular, when working with responsive designs, you often want to be able to set the width to a percentage and have the height correspond to some aspect ratio. This is something that the body responsible for designing CSS, the CSS Working Group (CSSWG), has been discussing recently and, at the recent CSSWG meeting in San Francisco, a proposed solution was agreed on.

This is a new resolution so we have no browser implementations yet, but I thought it worth writing up the proposal, in case anyone in the wider web community could see a showstopping issue with it. It also gives something of an insight into the work of the CSSWG and how issues like this are discussed, and new features designed.

What Is The Problem We Are Trying To Solve?

The issue is in regard to non-replaced elements, which do not have an intrinsic aspect ratio. Replaced elements are things like images or a video placed with the <video> element. They are different to other boxes in CSS as they have set dimensions, and their own behavior. These replaced elements are said to have an intrinsic aspect ratio, due to them having dimensions.

A div or some other HTML element which may contain your content has no aspect ratio; you have to give it a width and a height. There is no way to say that you want to maintain a 16 / 9 aspect ratio, and that whatever the width is, the height should be worked out using the given aspect ratio.

A very common situation is when you want to embed an iframe in order to display a video from a video sharing site such as YouTube. If you use the <video> element then the video has an aspect ratio, just like an image. This isn’t the case if the video is elsewhere and you are using an embed. What you want is for your video to be responsive, yet remain at the correct aspect ratio for the video. What you get however, if you set width to 100%, is the need to then set a height. Your video ends up stretched or squished.

Let’s also look at a really simple case of creating a grid layout with square cells. If we were using fixed column track sizes then it is easy to get our square cells, as we can define rows to be the same size as the column tracks. We could also make our row tracks auto-sized and set a height on the items.

The problem comes when we want to use auto-fill, and fill a container with as many column tracks as will fit. We now can’t simply give the items a height, as we don’t know what the width is. Our items are no longer square.

Being able to size things based on their aspect ratio would mean we could calculate the correct aspect ratio once the grid item is laid out, making the grid items as tall as they are wide, so that they always remain square no matter what their width.

Current Aspect Ratio Solutions

Web developers have been coping with the lack of any aspect ratio in CSS in various ways, the main one being the “padding hack”. This solution uses the fact that padding % in the block direction (so top and bottom padding in a horizontal top to bottom language) is calculated from the inline size (width).
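In its simplest form the hack looks something like this (a sketch for a 16:9 embed; the class names are mine):

```css
.embed-wrapper {
  position: relative;
  /* block-direction padding % resolves against the width,
     so 9 / 16 × 100 = 56.25% yields a 16:9 box */
  padding-top: 56.25%;
}

.embed-wrapper iframe {
  position: absolute;
  top: 0;
  left: 0;
  width: 100%;
  height: 100%;
}
```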

The article Aspect Ratio Boxes on CSS Tricks has a good rundown of the current methods of creating aspect ratio boxes. The padding hack works in many cases but does require a bunch of hoops to jump through in order to get it working well. It’s also not in the slightest bit intuitive – even if you know why and how it works. It’s those sort of things that we want to try and solve in the CSS Working Group. Personally I feel that the more we get elegant solutions for layout in CSS, the more the messy hacks stand out as something we should fix.

For the video situation you can use JavaScript, and a very popular solution is to use FitVids, as also described in the CSS Tricks article. Using JavaScript is a reasonable solution, but it’s more script to load, and also something else to remember to do. You can’t simply plonk a video into your content and have it just work.

The Proposed Solution

What we are looking for is a generic solution that will work for regular block layout, such as a video in an iframe, or a div on the page. It should also work if the item is a grid or flex item. There is a separate issue of wanting grid tracks themselves to maintain an aspect ratio (having the row tracks match the columns); this solution would, however, fix many of the cases where you might want that. You would be working from the item out rather than from the track in.

The solution will be part of the CSS Sizing Specification, and is being written up in the CSS Sizing 4 specification. This is the first step for new features being designed by the CSS Working Group: the idea is discussed, and then written up in a specification. An initial proposal for this feature was brought to the group by Jen Simmons, and you can see her slide deck which goes through many of the use cases discussed in this article here.

The new property introduced to the Sizing specification is the aspect-ratio property. This property will accept a value which is an aspect ratio such as 16/9. For example, if you want a square box with the width the same as the height you would use the following:

.box {
  width: 400px;
  height: auto;
  aspect-ratio: 1/1;
}

For a 16 / 9 box, such as for a video:

.box {
  width: 100%;
  height: auto;
  aspect-ratio: 16/9;
}

For the example with the square items in a grid layout, we leave our grid tracks auto sized, which means they will take their size from the items; we then size the items with the aspect-ratio property.

.grid {
    display: grid;
    grid-template-columns: repeat(auto-fill, minmax(200px, 1fr));
}

.item {
    aspect-ratio: 1/1;
}

Features often go through various iterations before browsers start to implement them. Having discussed the need for an aspect ratio unit previously, this time we were looking at one particular concern around the proposal. What happens if you specify an aspect ratio box, but then add too much content to the box? This same issue is brought up in the CSS Tricks article about the padding hack, with equally unintuitive solutions required to fix it.

Dealing With Overflow

What we are dealing with here is overflow, as is so often the case on the web. We want to have a nice neatly sized box, our design asks for a nice neatly sized box, our content is less well behaved and turns out to be bigger than we expected and breaks out of the box. In addition to specifying how we ask for an aspect ratio in one dimension, we also have to specify what happens if there is too much content, and how the web developer can tell the browser what to do about that overflow content.

There is a general design principle in CSS that we try to avoid data loss. Data loss in a CSS context is where some of your content vanishes. That might either be because it gets poked off the side of the viewport, or is cropped when it overflows. It’s generally preferable to have a messy overflow, as you will notice it and do something about it. If we cause something to vanish, you may not realise, especially if it only happens at one breakpoint.

We have a similar issue in grid layout which is nicely fixed with the minmax() function for track sizes. You can define grid tracks with a fixed height using a length unit. This will give you a lovely neat grid of boxes, however as soon as someone adds more content than you expected into one of those boxes, you will get overflow.

The fix for this in grid layout is to use minmax() for your track size, and make the max value auto. In this case auto can be thought of as “big enough to fit the content”. What you then get is a set of neat looking boxes that, if more content than expected gets in, grow to accept that content. Infinitely better than a messy overflow or cropped content. In the example below you can see that the first row, which contains the box with extra content, has grown; the second row is sized at 100 pixels.
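That track definition can be sketched as (the selector is illustrative):

```css
.grid {
  display: grid;
  /* rows are at least 100px tall, but grow to fit overflowing content */
  grid-auto-rows: minmax(100px, auto);
}
```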

We need something similar for our aspect ratio boxes, and the suggestion turns out to be relatively straightforward. If you do nothing about overflow, then the default behavior will be that the content is allowed to grow past the height that is inferred by the aspect ratio. This will give you the same behavior as the grid tracks sized with minmax(). In the case of a height, the height will be at least the height defined by the aspect ratio, but if the content is taller, the height can grow to fit it.

If you don’t want that, then you can change the value of overflow as you would normally do. For example hiding the overflow, or allowing it to scroll.

Commenting On Proposals In Progress

I think that this proposal covers the use cases detailed in the CSS Tricks article, and the common things that web developers want to do. It gives you a way to create aspect ratio sized boxes in various layout contexts, and is robust. It will cope with the real situation of content on the web, where we don’t always know how much content we have or how big it is.

If you spot some real problem with this, or have some other use case that you think can’t be solved, you can comment on the proposal directly by raising an issue at the CSSWG GitHub repo. If you don’t want to do that you can always comment here, or post to your own blog and link to it here so I can see it. I’d be very happy to share your thoughts with the working group as this feature is discussed.

Smashing Editorial
(il)

Source: Smashing Magazine, Designing An Aspect Ratio Unit For CSS

Can You Make More Money With A Mobile App Or A PWA?

dreamt up by webguru in Uncategorized | Comments Off on Can You Make More Money With A Mobile App Or A PWA?

Can You Make More Money With A Mobile App Or A PWA?

Can You Make More Money With A Mobile App Or A PWA?

Suzanne Scacca



Let’s be honest. The idea behind building mobile apps, websites or any other branded platforms online is to make money, right? Your clients have contacted you to do this for them in order to maximize their results and, consequently, their profits. If money didn’t matter, they’d use a free website builder tool to throw something — anything — up there and you’d no longer be part of the equation.

Money does matter, and if your clients don’t see a huge return on their investment in an app, it’s going to be quite difficult to sustain a business built around designing apps.

Today, I’m going to talk about why app monetization needs to be one of the first things you think about before making a choice between designing a mobile app or PWA for your clients. And why the smartest thing you can do right now is to steer profit-driven clients to a PWA.

Your Guide To Progressive Web Apps

There are a lot of pain points that users face when browsing old non-PWA websites. Make sure you’re familiar with important technologies that make for cool PWAs, like service workers, web push notifications and IndexedDB. Read more  →

PWA vs. Mobile App Monetization: Consider This

I’ve been watching the MoviePass app closely since it came out. Part of me wanted to hop aboard and start reaping the benefits of the too-good-to-be-true movie app’s promise, but part of me just didn’t see how that kind of business model could be viable or sustainable.

For starters, the subscription service was way underpriced. I realize the makers of the app hoped that many users wouldn’t use their subscriptions to the fullest, or at all, which would then drive profits up on their end. They had a similar suspicion regarding the amount of data they’d be able to mine from users and sell to advertisers and marketers. But, in 2019, that all seems to have been faulty logic with the app in a major downward spiral of profit loss.

It just goes to show you that no matter how popular your mobile app may be, it’s really difficult to make a profit with one. Now, “difficult” does not mean “impossible”. There are certainly mobile apps that make incredible amounts of money. But just because you can make money with a mobile app, does it mean it’s the smartest option for your client? If your client’s end users are craving a convenient, fast and intuitively designed app experience, couldn’t you give them a PWA instead?

If you look at the big picture, you’ll find that there’s a greater opportunity to make money (and, not only that, make a profit) with a PWA when compared to a mobile app.

Let’s explore how we get to that point and how you can use these calculations to determine what the best option is for your client.

  1. The Cost To Build
  2. The Cost To Maintain
  3. The Cost To Acquire Users
  4. The Cost To Monetize

#1: The Cost To Build

Building an app is no easy feat, whether it be a native mobile app or PWA. According to Savvy, there are three tiers of mobile app development options:


Savvy breaks down app building costs into three categories (Source: Savvy) (Large preview)

According to Savvy, small development shops may charge up to $100,000 to build an app. App development agencies can charge up to $500,000. And those targeting enterprises may bill up to $1,000,000.

That said, PWAs aren’t cheap either.

Give Otreva’s “How Much to Build an App” calculator a try. These are the estimated costs I received (top-right corner) to build an e-commerce mobile app that’s feature-rich:


Otreva calculates the cost of building an ecommerce app to be $356k. (Source: Otreva Calculator) (Large preview)

Compare that to the estimated costs to build a progressive web app with the same exact features:


Otreva calculates the cost of building an ecommerce PWA to be $346k. (Source: Otreva Calculator) (Large preview)

Although the costs here aren’t too far apart, I don’t expect that to be the case when building less robust apps for clients. As you decrease the number of features included, you’re likely to find that the gap between the cost of mobile apps and PWAs grows.

Even so, let’s say what you plan to build is comparable in pricing regardless of which app you choose. Keep in mind that these calculators don’t take into consideration the cost of building out the backend server environment (which is something a PWA doesn’t need). Plus, when you compare the timeline of developing a mobile app against a PWA, mobile apps will almost always take longer as you have to build an app for each of the stores you want it to appear in.

So, when you consider the upfront costs of building an app, be sure to look a bit more closely at everything involved. At some point, the revenue you generate is going to have to make up for that investment (i.e. loss).

#2: The Cost To Maintain

Software of any kind must be updated regularly — as does anything you build with it. That’s because designs go stale, security bugs require patches and performance can always be improved. But the way you manage and maintain mobile apps vs. PWAs is incredibly different.

BuildFire has a great roundup of the hidden costs that come with having a mobile app. In it, author Ian Blair shares the most expensive maintenance costs associated with apps:


BuildFire estimates the most expensive mobile app hidden costs. (Source: BuildFire) (Large preview)

Some of these will certainly overlap with PWAs. However, take a look at these three that are specific to mobile apps:

  • App update submissions = $2,400
  • iOS and Android updates = $10,000
  • Servers = $12,000

That’s why you’ll find that most estimates put the cost of annual mobile app maintenance at about 20% of the original upfront cost to build it.
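That 20% rule of thumb makes it easy to sketch a rough total cost of ownership. Here’s a minimal back-of-the-envelope calculation; the 10% PWA maintenance rate is my own illustrative assumption, not a BuildFire figure:

```typescript
// Rough total cost of ownership, using the rule of thumb above that
// annual mobile app maintenance runs ~20% of the original build cost.
// The 10% rate used for the PWA below is an illustrative assumption.
function totalCostOfOwnership(
  buildCost: number,
  years: number,
  maintenanceRate = 0.2,
): number {
  return buildCost + buildCost * maintenanceRate * years;
}

// The Otreva estimates from earlier, projected over five years:
const mobileAppCost = totalCostOfOwnership(356_000, 5);     // ≈ $712,000
const pwaCost = totalCostOfOwnership(346_000, 5, 0.1);      // ≈ $519,000
```

Under these assumptions, the $10k gap at build time widens to roughly $712k vs. $519k over five years.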

One thing that’s missing from this breakdown is the time-cost of maintaining a mobile app. Because not only are updates costly to manage, but they take a while to happen, too, as app stores have to review any changes to the software or content you’re attempting to push through.

PWAs are significantly easier and cheaper to maintain as they’re web-based applications. So, it’s not all that different from what you would do to keep a website up-to-date.

Yes, the surrounding web hosting architecture, SSL certificate, payment gateways and other integrated technology will require monitoring and maintenance. However, a lot of that stuff is managed by the provider itself. Most of what you have to concern yourself with in terms of maintaining a PWA is the update piece.

When the underlying software has an update available or you simply want to make a change to the content of the PWA, you can push it through to your site (after testing on a staging server first, of course). There’s no app store process you have to follow or to wait for approval from. Changes immediately go live.

Recommended reading: Native And PWA: Choices, Not Challengers!

#3: The Cost To Acquire Users

Once you have a handle on how much the app itself costs, it’s time to wrap your head around the cost of customer acquisition. This is where we’ll start to see PWAs pull far ahead of mobile apps.

For example, here are all the things you have to do in order to acquire users for a mobile app:

Get An App Store Membership

Pay the $99/year Apple Developer Program membership fee or pay the $25 one-time fee to create a Google Play Developer account. You can’t publish to the stores without them.

In-Depth Market Testing

Because a mobile app is such an expensive investment, you can’t afford to throw something into the app store without first doing in-depth audience research and beta testing.

This means looking at the current app market to see if there’s even a need or room for your mobile app. Then, study the target audience and how they’re likely to stumble upon it and convert. Once you have a good hypothesis in place, beta testing will be key to ensure you have a viable strategy. (It’ll be quite expensive, too.)

Decide On A Customer Acquisition Model

Getting someone to install your app from an app store is one thing. Getting users to become an actual customer is another. If you haven’t done so already, figure out what sort of action you’ll require of them before you’re willing to call them a “customer”.

Statista’s 2017–2018 data on the average mobile app user acquisition costs might have you reconsidering your original choice though:


Statista presents estimates for the cost of mobile app customer acquisition. (Source: Statista) (Large preview)

Not only is there a great discrepancy between acquiring a user who’s willing to install your app and someone who’s willing to pay for a subscription, but there’s also a large discrepancy between the cost of converting Android vs. iOS users.

You might find that the monetization model you had hoped to use just won’t pay off in the end. (More on that down below.)

App Store Optimization

Publishing a mobile app to an app store isn’t enough to guarantee users will want to install it. You have to drive traffic to it within each app store.

If you don’t have a tool that’ll help you write descriptions and metadata for the listing, you’ll need to hire a copywriter or agency who can help (and they’re not cheap). Plus, don’t forget about the screenshots of the app in action. There’s still a bit of work to do before you can get that app out to the app stores for review and approval.

Build A Website

Yep, that’s right. Even though your client has spent all this money to build a mobile app, they’re still going to need a website when all is said and done. It’s not going to be a duplicate of the app store though. All they really need is a high-converting landing page that’ll rank in search, bring attention to the app and help drive engaged leads to it.

That said, websites cost money. You’ll need a domain name, web hosting, SSL certificate and perhaps a premium theme or plugin to design it.

Get Good Press

Because you can’t leverage regular ol’ search marketing to drive traffic to your app (since there’s no link to share), you have to rely on online publications and influencers to talk it up on your behalf. You should also be doing this on your own through social media. The only thing is, organic social media marketing takes time.

If you want good press for your mobile app, you’ll have to use paid social ads, search ads and affiliate relationships to help you spread the word quickly.

Retention Rate Optimization

One final customer acquisition cost to factor in is retention rate optimization. As we’ve seen before, all it takes is 30 days for a mobile app to lose up to 90% of its user base. If you’re not continually evaluating the quality of your app and refining it to create a better experience, you might as well double the cost of customer acquisition now.
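To make that retention statistic concrete, here’s a toy decay model; the smooth per-day churn is a simplifying assumption of mine (real retention curves drop fastest in the first few days):

```typescript
// If an app loses up to 90% of its users within 30 days, a smooth daily
// decay model implies the per-day retention rate below. This is an
// illustrative simplification, not how real cohorts behave exactly.
const DAY_30_RETENTION = 0.1; // 10% of users remain after 30 days
const dailyRetention = Math.pow(DAY_30_RETENTION, 1 / 30); // ≈ 0.926

function usersRemaining(cohortSize: number, day: number): number {
  return cohortSize * Math.pow(dailyRetention, day);
}

// A launch cohort of 10,000 installs shrinks to ~1,000 active users
// by day 30, which is why acquisition spend alone rarely pays off.
```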

Consumers, in general, aren’t as eager to spend money with new brands and definitely don’t spend as much as long-time customers do. If you don’t have a plan to develop ways to breed loyalty with current ones, your mobile app is going to bleed a lot of money along the way.

On the other hand, there’s a lot less you must do to acquire users for a progressive web app:

Search Engine Optimization

A PWA is already on the web, so there’s no need to build an additional website to promote it. All you need to worry about now is optimizing it for search. You could do this on your own with free keyword tools and SEO plugins.

However, it’s probably worth investing in an SEO pro or agency if you’re trying to get the app to the top of search ASAP.

Paid Promotions

There’s no need to go to the extent of a mobile app with press pitches, affiliate links or influencer marketing. Instead, you can use paid ads on social media and Google (all within a reasonable budget) to increase the presence of your PWA in search.

Leverage The “Add To Homescreen” Button

Unlike mobile apps, which need users to find them within app stores, PWAs are searchable. However, if you’re trying to retain these users and convert them into customers, your best bet is to put the “Add to Homescreen” button in front of them like The Weather Channel does.


The Weather Channel asks visitors to add the PWA to the home screen. (Source: The Weather Channel) (Large preview)

All it takes is one click and they’ll have instant access to your PWA right from their mobile homescreen.
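On the web side, making a PWA installable (so browsers can offer that “Add to Homescreen” prompt) mostly comes down to serving a web app manifest over HTTPS alongside a registered service worker. A minimal manifest sketch, where the app name, colors and icon paths are all placeholders:

```json
{
  "name": "Example Weather PWA",
  "short_name": "Weather",
  "start_url": "/",
  "display": "standalone",
  "background_color": "#ffffff",
  "theme_color": "#0d47a1",
  "icons": [
    { "src": "/icons/icon-192.png", "sizes": "192x192", "type": "image/png" },
    { "src": "/icons/icon-512.png", "sizes": "512x512", "type": "image/png" }
  ]
}
```

Link it from each page with `<link rel="manifest" href="/manifest.json">` and the browser takes care of surfacing the install prompt.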

#4: The Cost To Monetize

That doesn’t make sense, does it? The “cost” to monetize? Sadly, it does.

Before I explain the costs, let’s discuss the kinds of monetization that are available for each.

Mobile App Monetization

Paid apps are ones that are completely gated off unless a user pays for subscription access. The New York Times does this, though it gives users a handful of articles to read for free to give them a taste of what they’re missing.


The New York Times app is subscription only. (Source: The New York Times) (Large preview)

Freemium apps are ones that are mostly free, but ask for payment to access certain parts of the app. An example of this is Jackpot Magic Slots, which allows users to create competitive clubs like this one which requires “member” funding:


Jackpot Magic Slots enables users to create clubs that receive funding. (Source: Jackpot Magic Slots) (Large preview)

The catch is that users will inevitably need to purchase coins or spend a lot of time gambling in the app in order to afford those funding fees. So, Jackpot Magic Slots is indirectly making money off of its users.

In-app purchase apps are ones that allow unfettered access to the app. However, they ask for payment from users that want to upgrade their experience with in-app currency, premium features and more. Words with Friends sells Power Ups, Premiums and Coins to customers who want to get more out of their gameplay.


Words with Friends charges for in-app upgrades. (Source: Words with Friends) (Large preview)

Sponsored content apps are ones that publish sponsored ads and content to generate revenue. Facebook, of course, is a master of this seeing as how it’s nearly impossible for businesses to get in front of users otherwise:


Facebook is basically a pay-to-play platform for businesses. (Source: Facebook) (Large preview)

Ad-free apps are ones that accept payment to remove intrusive ads from getting in the way of the app interface.

eCommerce apps are ones that sell goods through their own payment gateways as Fashion Nova does:


Fashion Nova has a mobile app store, too. (Source: Fashion Nova) (Large preview)

Free apps are just what they sound like. However, they aren’t typically available to the public at large. Instead, loyalty users, enterprise customers and others who pay for a premium service in person or online gain access for free.

There’s another way free apps make money and that’s to reward users for referring others to it as is the case with Wordscapes:


Wordscapes rewards users for inviting others to join the app. (Source: Wordscapes) (Large preview)

It might not lead directly to cash in the bank for the app, but it does increase the amount of word-of-mouth referrals which tend to be more valuable in the long run anyway.

The Cost…
As great as all these monetization methods are, there are two big things to note here in terms of what mobile app monetization is going to cost you:

Mobile app stores take a portion of money earned through your app. More specifically, app stores take 30% of your earnings.

This becomes obvious when you compare app store revenues:


Statista tracks mobile app store revenue trends from 2015 to 2020. (Source: Statista) (Large preview)

Against mobile app revenues:


Statista tracks mobile app revenue trends from 2015 to 2020. (Source: Statista) (Large preview)

Note that the app store revenues shown above are about a third of total mobile app revenues. So, your earnings with a mobile app are more like 70% of your projected total earnings.
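That 30% commission is simple to model. A minimal sketch, ignoring reduced-rate subscription tiers:

```typescript
// Net developer earnings after the standard 30% app store commission
// discussed above. Reduced-rate tiers and renewal discounts are
// ignored to keep the sketch simple.
const STORE_CUT = 0.3;

function netAfterStoreCut(grossRevenue: number): number {
  return grossRevenue * (1 - STORE_CUT);
}

// $100,000 in gross app revenue leaves roughly $70,000 for the developer.
```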

Another monetization “cost” you have to think about is the fact that app stores don’t pay you out right away.

According to Apple:

Payments are made within 45 days of the last day of the month in which book purchases were made. To receive payment, you must have provided all required banking and tax information and documentation, as well as meeting the minimum payment threshold.

Not only that, but you have to meet a certain minimum threshold. If your app doesn’t generate over a certain limit based on which country you operate out of, you might have to wait longer.

According to Google:

In many cases, Google will initiate a payment on the 15th day of each month or on the next business day, if your bank account has been verified and you’ve reached a minimum balance, which varies by region.

Google’s minimum threshold is much higher than Apple’s, so you could end up waiting even longer to get paid your app earnings.

In sum, not only are you paying the app stores a membership fee and letting them take a good chunk of your earnings, but you’re paying with your time as well.

PWA Monetization

Subscriptions: Just like mobile apps, PWAs can sell premium access. The Financial Times is an online newspaper that sells premium access to its stories through its PWA:


Financial Times has a PWA that’s subscription-only. (Source: Financial Times) (Large preview)

Freemium access: Since you’re not apt to find a lot of gaming apps as PWAs, freemium access won’t come in the form of things like in-app upgrades. Instead, you’ll see examples like The Billings Gazette which offer subscriptions for a more streamlined news-reading experience:


The Billings Gazette offers survey-free articles for a subscription. (Source: The Billings Gazette) (Large preview)

Advertising: Ads have been a part of the web’s monetization model for a long time now, so it would be odd for PWAs to ignore this obvious choice. Forbes is one such example that uses a lot of advertising on its PWA:


Forbes makes the most of its ad space on its PWA. (Source: Forbes) (Large preview)

Affiliate marketing is another way to collect ad revenue with PWAs.

eCommerce: Traditional ecommerce sales can take place on PWAs, especially since an SSL certificate is required in order to have one. Debenhams is a nice example of a retailer that sells products online through a PWA to generate revenue.


Debenhams attracts mobile shoppers with its ecommerce PWA. (Source: Debenhams) (Large preview)

But that’s not all. Any kind of business can easily convert its website into a PWA and continue selling its products, services, and downloadables. eCommerce monetization is available to everyone.

The Cost…
Compared to how many ways you can earn money with a mobile app, this might seem like a paltry list. But here’s the thing:

When you make money with a PWA, it’s yours to keep. That is, aside from any affiliate commissions or e-commerce gateway fees you may owe. But neither of those come close to the 30% take the app stores claim.

Additionally, if you’re helping your client make the move from website to PWA (which is much more seamless than website to native app), you can expect a major leap in revenue generation almost right away.

Think with Google interviewed Mobify CEO Igor Faletski to see what sort of monetization trends his company has noticed when it comes to PWAs. Here’s what he said:

Not only can a PWA provide your customers with a richer mobile experience sooner, it can deliver a faster return on investment. And that ROI can potentially offset the cost and risk of the larger digital transformation project.
Our customers typically see a 20% revenue boost with a PWA, so every minute you don’t have a PWA is a minute spent with 20% less revenue on your busiest customer touchpoint. As an example, a retailer with $20 million in annual e-commerce revenue could lose $1.4 million by waiting a month to offer a PWA and another $6.8 million by waiting for six months.


Think with Google shows how much money you can earn if you launch a PWA today. (Source: Think with Google) (Large preview)

Want to see a real-life example of how businesses are earning more money by creating a progressive web app? Check out this story about JM Bullion.

Thanks to the major increase in speed with its PWA:

JMBullion.com’s smartphone conversion rate is 28% higher this month compared with the month prior to switching over.

Wrapping Up

Before you go rushing out to build a mobile app for your clients, think about the kind of ROI they can realistically expect compared to a PWA.

The upfront costs and ongoing maintenance of a native mobile app are huge. If your client isn’t generating huge sums of money right away (or even before launch), the app itself might not even be a sustainable venture.

However, if you look at the PWA counterpart, not only is it less expensive to build and maintain, but the turnaround times ensure that cash will start to flow in sooner rather than later. Plus, since PWAs are web-based, there’s really no secret to how much work is involved in optimizing them for search or marketing them to the world.

With a shorter learning curve and lower costs, it seems odd to opt for a mobile app when a PWA can provide nearly as good of an experience.


Smashing Editorial
(ra, yk, il)

Source: Smashing Magazine, Can You Make More Money With A Mobile App Or A PWA?

Biometrics And Neuro-Measurements For User Testing

dreamt up by webguru in Uncategorized | Comments Off on Biometrics And Neuro-Measurements For User Testing


Susan Weinschenk



(This article is sponsored by Adobe.) So it’s time to test the latest version of your app with users. You schedule your first user testing session. The participant enters the room; your lab partner puts velcro on the participant’s finger and fits a headband and head cap on before she sits down at a computer to start the user test session. What’s all this for? It’s biometrics and neuro-measurements.

In a “traditional” user test, you put a participant in front of your app, product, or software and give them tasks to do, ask them to “think aloud”, and observe and record what they say and what they do. You may ask them some questions before and after the session, too. I’ve done thousands of these sessions, and chances are that if you are a user researcher, you have, too.


The most common way of user testing: participants are seated in front of a screen and asked to say what they see and feel. (Image source: iMotions) (Large preview)

There’s nothing really wrong with user testing this way except that it relies on the participant telling you (either during or after the session) why they did what they did, and how they feel about the product or app. You can see that they clicked on a particular button or touched a link on the mobile app, but when they explain why, you are only getting the conscious reason.



What if you could get their unconscious reactions? What if you could take a look inside your users’ brains and see what it is they aren’t saying, i.e. the things they themselves may not realize about their reactions to your product?

We know that most mental processing — including decision-making and emotional reactions — occurs unconsciously. So if people tell you how they feel and why they did something, it is possible that they believe what they are saying is the truth, but it’s also possible that they don’t know how they feel or why they did or did not take an action.

People filter their feelings, decisions and reasons consciously and by that time you aren’t necessarily getting real data. Add to that the fact that users aren’t always truthful during user tests. They may not want to offend you by telling you they think your product is hard to use or boring.

So that’s why user researchers are starting to use some other tools to get reactions and data directly from the body without the filtering of conscious thought. Hence, biometrics and neuro-measurements.

Some of these new tools are easy and inexpensive to use. Others may take more investment of your time and budget. Or you may want to bring in an outside firm that specializes in these tools. (Some suggestions for outside vendors are at the end of the article.)

Let’s take a look at what’s available.

Galvanic Skin Response (GSR)

GSR is also called “electrodermal activity” or EDA. A typical GSR measurement device is a relatively small, unobtrusive sensor that is connected to the skin of your finger or hand.

Sweat glands on the hands are very sensitive to changes in your emotional state. If you become emotionally aroused — either positively or negatively — then you will release more sweat in your hands. Sometimes, these are very small changes that you may not notice. This is what a GSR monitor is measuring.


You may not notice that there is a small amount of moisture, but even the tiniest amount of increase in moisture changes the amount of electrical conductance of your skin. (Image source: iMotions) (Large preview)

The GSR monitor can’t tell if you are happy, sad, scared, and so on, but it can tell if you are becoming more or less emotional. And since the amount of sweat you release is not under conscious control, a GSR monitor can measure what you may not be consciously aware of.

GSR monitoring has been around for over a hundred years. The monitors are relatively inexpensive and easy to learn how to use. The price for a GSR monitor ranges from about $150 to $600, depending on the brand and model you get. If you want to buy your own, check out Carolina Supply. iMotions also has a great downloadable guide to GSR monitors that you can get for free.

Recommended reading: How People Make Decisions

Respiration

It’s also relatively easy to measure respiration. When people are emotionally aroused they breathe faster. This can be detected in several ways — the easiest being to place a cloth band around the chest and/or stomach and measure the expansion of the chest or stomach as people breathe.


A ‘respiration transducer’ belt (this one from Biopac) helps measure any changes in the abdominal circumference that occur as a subject breathes. (Image source: iMotions) (Large preview)

If/when they are using your product and they start breathing faster, you can deduce that something has (either positively or negatively) affected them emotionally.

Heart Rate

You can also use the band around the chest or even a simpler measurement on a finger to measure heart rate/pulse. When you are emotionally aroused, your heart beats faster and your pulse increases.

How would you use GSR, respiration, or heart rate data in a user test or study? Let’s say you are testing an app for getting an insurance quote. You ask the user what they think of the insurance quote app, and they answer:

“It was OK, it wasn’t too hard to use.”

But looking at their GSR, respiration, and/or heart rate might tell you that they were stressed. The data will also show you when and where in the process they had the most stress.
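To illustrate how that works in practice, here’s a toy sketch of lining up a GSR trace with logged user actions; the data shapes and threshold here are my own assumptions, not any vendor’s API:

```typescript
// Toy sketch: flag the moments of highest arousal in a GSR trace and
// match them to logged user actions. The interfaces and threshold are
// illustrative assumptions, not part of any vendor's SDK.
interface GsrSample { t: number; microsiemens: number; }
interface UserAction { t: number; label: string; }

// Timestamps where conductance rises more than `threshold` above the
// session mean (a crude stand-in for proper skin conductance
// response detection).
function stressPeaks(samples: GsrSample[], threshold: number): number[] {
  const mean =
    samples.reduce((sum, s) => sum + s.microsiemens, 0) / samples.length;
  return samples
    .filter((s) => s.microsiemens > mean + threshold)
    .map((s) => s.t);
}

// The logged action closest in time to a given stress peak.
function nearestAction(t: number, actions: UserAction[]): string {
  return actions.reduce((best, a) =>
    Math.abs(a.t - t) < Math.abs(best.t - t) ? a : best,
  ).label;
}
```

A real analysis pipeline would use proper skin conductance response detection and the vendor’s timestamps, but the idea of time-aligning arousal peaks with task steps is the same.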

Like GSR monitors, heart-rate and respiration monitors are relatively inexpensive (under $100). What you may really want, however, is a total package that includes a universal monitor that you can plug more than one measurement into.

For example, you can use GSR, heart rate, respiration and even EEG (discussed below), plus software that lets you monitor the data and combine it with actions your users are taking at specific moments during your user study. These packages will cost you a lot, however. A whole system may run as much as $7,000.

To get started, you may want to bring in a vendor who has the equipment to get your feet wet before you decide to buy these tools for your lab.

Eye Tracking

I am probably unusual in my criticisms of eye-tracking. A lot of people like eye tracking, but I think it has some problems. I’ll explain why.

Eye tracking involves having people look at a special monitor while wearing eye-tracking headsets/glasses. The eye tracker measures what you look at and how long you look at it. If you were doing user testing on a web page, then you could see (either for an individual or through aggregated data) where people looked most, how long they looked at it, and what people did not look at, and so on.

Eye tracking works just fine in measuring what it is measuring. But here’s my criticism: Eye tracking only measures where people are looking with their central vision. It doesn’t measure peripheral vision.

Recent research on peripheral vision shows that peripheral vision is more important than once thought for information processing. For example, images of danger and emotion are processed faster in peripheral vision than in central vision. We also know now that people use peripheral vision to decide if they are in the right place, or in the case of software and website design, if they are at the right page or screen. It’s possible for people to “see” something in peripheral vision, but not be consciously aware that they have. And what they see can influence the action they take.

Since eye tracking doesn’t track any peripheral vision data, I am not a big fan of it. Monitors with eye tracking built in, plus the software to analyze and report on the data can cost around $7,000 to $10,000.



Facial Coding

Cameras can capture someone’s face as they use a product or watch a video. Algorithms can then analyze the facial expressions and tell you whether the person is confused, happy, scared, and so on.


Facial coding uses algorithms to take a good guess at what the person is feeling. (Image source: iMotions) (Large preview)

Facial coding is also an “add-on” feature to eye tracking. You should assume similar pricing ($7,000 to $10,000) for facial coding as for eye tracking.

fEMG

EMG stands for Electromyography, or muscle movement. Whenever a muscle contracts it generates a small amount of electricity which can be detected with some fairly simple electrodes. Muscle movement can be very small — you may not see the muscle move, but you can measure it.

This means that some of the most interesting EMG measurements come from the movement of muscles in the face, or fEMG. Facial coding uses algorithms to take a good guess at what the person is feeling, but with fEMG you can actually measure the muscles in the face and thereby more accurately assess the emotion that the person is feeling. There is muscle activity in the face that a video won’t detect, but that fEMG recordings will, which means fEMG can pick up on emotions that are not being obviously displayed through facial coding alone.


An image of a testing person using a fEMG measurement device placed to his forehead that records facial muscle movement
(Image source: iMotions) (Large preview)

When would you use facial coding or fEMG?

Well, let’s say you have created some new videos for the careers/employment page of your company’s website. The videos have real people who work at the company talking about how they came to be an employee, and what it is they like about working at the company. You want to know if people like and resonate with the videos. Facial coding and, even better, fEMG, would help you measure what people are feeling, and even tell you which parts of the video are eliciting which emotions.

fEMG equipment and software are expensive and not easy to learn how to use. For this reason, you will probably want to start by bringing in a vendor rather than using this on your own.

EEG (Electroencephalography)

You can directly measure the electrical activity of the brain by placing electrodes on the scalp. EEG devices measure the electrical activity generated by neurons.

EEG measures electrical changes on the surface of the brain — not deep within particular brain structures. This means that EEG can’t tell you that a particular part of the brain is active. It can only tell you when there is more or less brain activity. You would need to use more sophisticated methods, such as fMRI (functional Magnetic Resonance Imaging) to study more specific brain activity. fMRI equipment is very large and very expensive, which is why only research and medical institutions use them. In contrast, EEG is inexpensive.

EEG measures whether a person is engaged and paying attention. EEG measurements are particularly good at showing you activity by seconds or even parts of a second. Let’s go back to the example of the user test to measure the impact of the employee story videos at the careers/jobs page of the corporate website. Are the videos interesting? Do people pay attention while watching them? Exactly which parts of the videos are engaging? EEG can tell you this.

When I was in graduate school and doing EEG research, we had to use electrodes and gel to get EEG readings, but now there are easier ways. You can place a cap on someone’s head, kind of like a swim cap, and the electrodes are built into the cap.


An EEG measurement device designed similar to a swim cap
(Image source: iMotions) (Large preview)

Some devices are like headsets rather than swim caps:


An EEG measurement device shaped like a headset
(Image source: Spark Neuro) (Large preview)

EEG devices range from the inexpensive to the expensive. For example, Emotiv makes a $299 EEG headset. You will probably, however, want to get a higher end version for $799, and then you will need a subscription for the software ($99 a month).

It can take a while to learn how to accurately read EEG data, so, again, it might be better to start by bringing in a vendor who has all the equipment and know-how until you learn.

Recommended reading: Grabbing Visual Attention With The Visual Cortex

Combining Measurements

It is common to combine multiple methods of biometrics together to help with the accuracy and interpretation of the results.

Although biometrics and neuro-measurements don’t tell the whole story, the data that we get from biometrics and neuro-measurements is more accurate than self-reporting. As the tools become easier to use and researchers get used to using them, they will become more common. We may even get to the point where we stop using the think-aloud technique altogether, although I don’t think we are there yet!

Takeaways

  • If you haven’t already researched biometrics for your user testing projects, now is a good time to check out these measurements as an addition to your current testing.
  • Pick a modality and/or a vendor and do a trial project.
  • If you are in charge of user-testing budgets, add in some biometrics to your budgeting process for the next year or two so you can get started.

Vendors

Vendors to consider for a biometric study:

This article is part of the UX design series sponsored by Adobe. Adobe XD tool is made for a fast and fluid UX design process, as it lets you go from idea to prototype faster. Design, prototype and share — all in one app. You can check out more inspiring projects created with Adobe XD on Behance, and also sign up for the Adobe experience design newsletter to stay updated and informed on the latest trends and insights for UX/UI design.

Smashing Editorial
(cm, ms, ra, il)

Source: Smashing Magazine, Biometrics And Neuro-Measurements For User Testing

Biometrics And Neuro-Measurements For User Testing


Susan Weinschenk



(This article is sponsored by Adobe.) So it’s time to test the latest version of your app with users. You schedule your first user testing session. The participant enters the room; your lab partner puts velcro on the participant’s finger and fits a headband and head cap on before she sits down at a computer to start the user test session. What’s all this for? It’s biometrics and neuro-measurements.

In a “traditional” user test, you put a participant in front of your app, product, or software and give them tasks to do, ask them to “think aloud”, and observe and record what they say and what they do. You may ask them some questions before and after the session, too. I’ve done thousands of these sessions, and chances are that if you are a user researcher, you have, too.


A traditional way of user testing in which a participant is seated in front of a screen and asked to say what they see and feel
The most common way of user testing: participants are seated in front of a screen and asked to say what they see and feel. (Image source: iMotions) (Large preview)

There’s nothing really wrong with user testing this way except that it relies on the participant telling you (either during or after the session) why they did what they did, and how they feel about the product or app. You can see that they clicked on a particular button or touched a link on the mobile app, but when they explain why, you are only getting the conscious reason.


People filter their feelings, decisions and reasons consciously.

What if you could get their unconscious reactions? What if you could take a look inside your users’ brains and see what it is they aren’t saying, i.e. the things they themselves may not realize about their reactions to your product?

We know that most mental processing — including decision-making and emotional reactions — occurs unconsciously. So if people tell you how they feel and why they did something, it is possible that they believe what they are saying is the truth, but it’s also possible that they don’t know how they feel or why they did or did not take an action.

People filter their feelings, decisions and reasons consciously, and by that time you aren’t necessarily getting real data. Add to that the fact that users aren’t always truthful during user tests. They may not want to offend you by telling you they think your product is hard to use or boring.

So that’s why user researchers are starting to use some other tools to get reactions and data directly from the body without the filtering of conscious thought. Hence, biometrics and neuro-measurements.

Some of these new tools are easy and inexpensive to use. Others may take more investment of your time and budget. Or you may want to bring in an outside firm that specializes in these tools. (Some suggestions for outside vendors are at the end of the article.)

Let’s take a look at what’s available.

Galvanic Skin Response (GSR)

GSR is also called “electrodermal activity” or EDA. A typical GSR measurement device is a relatively small, unobtrusive sensor that is connected to the skin of your finger or hand.

Sweat glands on the hands are very sensitive to changes in your emotional state. If you become emotionally aroused — either positively or negatively — then you will release more sweat in your hands. Sometimes, these are very small changes that you may not notice. This is what a GSR monitor is measuring.


An image of a GSR measurement device
You may not notice that there is a small amount of moisture, but even the tiniest amount of increase in moisture changes the amount of electrical conductance of your skin. (Image source: iMotions) (Large preview)

The GSR monitor can’t tell if you are happy, sad, scared, and so on, but it can tell if you are becoming more or less emotional. And since the amount of sweat you release is not under conscious control, a GSR monitor can measure what you may not be consciously aware of.

GSR monitoring has been around for over a hundred years. The monitors are relatively inexpensive and easy to learn how to use. The price for a GSR monitor ranges from about $150 to $600, depending on the brand and model you get. If you want to buy your own, check out Carolina Supply. iMotions also has a great downloadable guide to GSR monitors that you can get for free.

Recommended reading: How People Make Decisions

Respiration

It’s also relatively easy to measure respiration. When people are emotionally aroused they breathe faster. This can be detected in several ways — the easiest being to place a cloth band around the chest and/or stomach and measure the expansion of the chest or stomach as people breathe.


Belt from Biopac
A ‘respiration transducer’ helps measure any changes in the abdominal circumference that occur as a subject breathes. (Image source: iMotions) (Large preview)

If/when they are using your product and they start breathing faster, you can deduce that something has (either positively or negatively) affected them emotionally.

Heart Rate

You can also use the band around the chest or even a simpler measurement on a finger to measure heart rate/pulse. When you are emotionally aroused, your heart beats faster and your pulse increases.

How would you use GSR, respiration, or heart rate data in a user test or study? Let’s say you are testing an app for getting an insurance quote. You ask the user what they think of the insurance quote app, and they answer:

“It was OK, it wasn’t too hard to use.”

But looking at their GSR, respiration, and/or heart rate might tell you that they were stressed. The data will also show you when and where in the process they had the most stress.
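For instance, a simple way to surface “when and where” from a raw GSR trace is to flag samples that jump well above a rolling baseline. The sketch below is purely illustrative: the sample values, window size, and threshold are made up, and commercial GSR software ships with its own calibrated peak-detection routines.

```javascript
// Illustrative sketch: flag moments of elevated arousal in a GSR recording.
// The data, window size, and threshold are hypothetical.
function findArousalPeaks(samples, windowSize = 5, factor = 1.5) {
  const peaks = [];
  for (let i = windowSize; i < samples.length; i++) {
    // Baseline = mean of the preceding window of samples.
    const win = samples.slice(i - windowSize, i);
    const baseline = win.reduce((a, b) => a + b, 0) / windowSize;
    // Flag samples that jump well above the local baseline.
    if (samples[i] > baseline * factor) {
      peaks.push(i);
    }
  }
  return peaks;
}

// Skin conductance in microsiemens, sampled once per second (made-up data).
const gsr = [2.0, 2.1, 2.0, 2.1, 2.0, 2.1, 4.5, 4.4, 2.1, 2.0];
console.log(findArousalPeaks(gsr)); // indices of the stressful moments
```

Mapping those flagged indices back to timestamps in your session recording tells you which screen or task the participant was on when the spike occurred.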

Like GSR monitors, heart-rate and respiration monitors are relatively inexpensive (under $100). What you may really want, however, is a total package that includes a universal monitor into which you can plug more than one measurement.

For example, you can use GSR, heart rate, respiration and even EEG (discussed below), plus software that lets you monitor the data and combine it with actions your users are taking at specific moments during your user study. These packages will cost you a lot, however. A whole system may run as much as $7,000.

To get started, you may want to bring in a vendor who has the equipment to get your feet wet before you decide to buy these tools for your lab.

Eye Tracking

I am probably unusual in my criticisms of eye-tracking. A lot of people like eye tracking, but I think it has some problems. I’ll explain why.

Eye tracking involves having people look at a special monitor while wearing eye-tracking headsets/glasses. The eye tracker measures what you look at and how long you look at it. If you were doing user testing on a web page, then you could see (either for an individual or through aggregated data) where people looked most, how long they looked at it, and what people did not look at, and so on.

Eye tracking works just fine in measuring what it is measuring. But here’s my criticism: Eye tracking only measures where people are looking with their central vision. It doesn’t measure peripheral vision.

Recent research on peripheral vision shows that peripheral vision is more important than once thought for information processing. For example, images of danger and emotion are processed faster in peripheral vision than in central vision. We also know now that people use peripheral vision to decide if they are in the right place, or in the case of software and website design, if they are at the right page or screen. It’s possible for people to “see” something in peripheral vision, but not be consciously aware that they have. And what they see can influence the action they take.

Since eye tracking doesn’t track any peripheral vision data, I am not a big fan of it. Monitors with eye tracking built in, plus the software to analyze and report on the data can cost around $7,000 to $10,000.


Eye tracking only measures where people are looking with their central and not with their peripheral vision.

Facial Coding

Cameras can capture someone’s face as they use a product or watch a video. Algorithms can then analyze the facial expressions and tell you whether the person is confused, happy, scared, and so on.


Facial coding used on a scene of a movie played on a screen
Facial coding uses algorithms to take a good guess at what the person is feeling. (Image source: iMotions) (Large preview)

Facial coding is also an “add-on” feature to eye tracking. You should assume similar pricing ($7,000 to $10,000) for facial coding as for eye tracking.

fEMG

EMG stands for Electromyography, or muscle movement. Whenever a muscle contracts it generates a small amount of electricity which can be detected with some fairly simple electrodes. Muscle movement can be very small — you may not see the muscle move, but you can measure it.

This means that some of the most interesting EMG measurements come from the movement of muscles in the face or fEMG. Facial coding uses algorithms to take a good guess at what the person is feeling, but with fEMG you can actually measure the muscles in the face and thereby more accurately assess the emotion that the person is feeling. There is muscle activity in the face that a video won’t detect, but that the fEMG recordings will detect. This means that with fEMG you can pick up on emotions that are not being obviously displayed through just facial coding.


An image of a testing person using a fEMG measurement device placed to his forehead that records facial muscle movement
(Image source: iMotions) (Large preview)

When would you use facial coding or fEMG?

Well, let’s say you have created some new videos for the careers/employment page of your company’s website. The videos have real people who work at the company talking about how they came to be an employee, and what it is they like about working at the company. You want to know if people like and resonate with the videos. Facial coding and, even better, fEMG, would help you measure what people are feeling, and even tell you which parts of the video are eliciting which emotions.

fEMG equipment and software are expensive and not easy to learn how to use. For this reason, you will probably want to start by bringing in a vendor rather than using this on your own.

EEG (Electroencephalography)

You can directly measure the electrical activity of the brain by placing electrodes on the scalp. EEG devices measure the electrical activity generated by neurons.

EEG measures electrical changes on the surface of the brain — not deep within particular brain structures. This means that EEG can’t tell you that a particular part of the brain is active. It can only tell you when there is more or less brain activity. You would need to use more sophisticated methods, such as fMRI (functional Magnetic Resonance Imaging) to study more specific brain activity. fMRI equipment is very large and very expensive, which is why only research and medical institutions use them. In contrast, EEG is inexpensive.

EEG measures whether a person is engaged and paying attention. EEG measurements are particularly good at showing you activity by seconds or even parts of a second. Let’s go back to the example of the user test to measure the impact of the employee story videos at the careers/jobs page of the corporate website. Are the videos interesting? Do people pay attention while watching them? Exactly which parts of the videos are engaging? EEG can tell you this.

When I was in graduate school and doing EEG research, we had to use electrodes and gel to get EEG readings, but now there are easier ways. You can place a cap on someone’s head, kind of like a swim cap, and the electrodes are built into the cap.


An EEG measurement device designed similar to a swim cap
(Image source: iMotions) (Large preview)

Some devices are like headsets rather than swim caps:


An EEG measurement device shaped like a headset
(Image source: Spark Neuro) (Large preview)

EEG devices range from the inexpensive to the expensive. For example, Emotiv makes a $299 EEG headset. You will probably, however, want to get a higher end version for $799, and then you will need a subscription for the software ($99 a month).

It can take a while to learn how to accurately read EEG data, so, again, it might be better to start by bringing in a vendor who has all the equipment and know-how until you learn.

Recommended reading: Grabbing Visual Attention With The Visual Cortex

Combining Measurements

It is common to combine multiple methods of biometrics together to help with the accuracy and interpretation of the results.
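One common way to combine channels is to convert each signal to z-scores, so that heart rate, GSR, and respiration share a scale, and then average them into a single arousal index per time point. The sketch below is a naive, purely illustrative weighting; real analysis packages use more sophisticated models.

```javascript
// Normalize a series to z-scores so different channels share a scale.
function zScores(series) {
  const mean = series.reduce((a, b) => a + b, 0) / series.length;
  const sd = Math.sqrt(
    series.reduce((a, b) => a + (b - mean) ** 2, 0) / series.length
  );
  return series.map(x => (sd === 0 ? 0 : (x - mean) / sd));
}

// Average the normalized channels at each time point into one index.
function arousalIndex(channels) {
  const normalized = channels.map(zScores);
  return normalized[0].map((_, t) =>
    normalized.reduce((sum, ch) => sum + ch[t], 0) / normalized.length
  );
}

const heartRate = [70, 72, 71, 95, 93]; // beats per minute (made up)
const gsr = [2.0, 2.1, 2.0, 4.4, 4.3];  // microsiemens (made up)
console.log(arousalIndex([heartRate, gsr])); // peaks at samples 3 and 4
```

When both channels spike at the same moment, the combined index gives you more confidence that the arousal is real rather than noise in a single sensor.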

Although biometrics and neuro-measurements don’t tell the whole story, the data that we get from biometrics and neuro-measurements is more accurate than self-reporting. As the tools become easier to use and researchers get used to using them, they will become more common. We may even get to the point where we stop using the think-aloud technique altogether, although I don’t think we are there yet!

Takeaways

  • If you haven’t already researched biometrics for your user testing projects, now is a good time to check out these measurements as an addition to your current testing.
  • Pick a modality and/or a vendor and do a trial project.
  • If you are in charge of user-testing budgets, add in some biometrics to your budgeting process for the next year or two so you can get started.

Vendors

Vendors to consider for a biometric study:

This article is part of the UX design series sponsored by Adobe. Adobe XD tool is made for a fast and fluid UX design process, as it lets you go from idea to prototype faster. Design, prototype and share — all in one app. You can check out more inspiring projects created with Adobe XD on Behance, and also sign up for the Adobe experience design newsletter to stay updated and informed on the latest trends and insights for UX/UI design.

Smashing Editorial
(cm, ms, ra, il)

Source: Smashing Magazine, Biometrics And Neuro-Measurements For User Testing

How To Build An Endless Runner Game In Virtual Reality (Part 1)


Alvin Wan



Today, I’d like to invite you to build an endless runner VR game with webVR — a framework that gives a dual advantage: It can be played with or without a VR headset. I’ll explain the magic behind the gaze-based controls for our VR-headset players by removing the game control’s dependence on a keyboard.

In this tutorial, I’ll also show you how you can synchronize the game state between two devices which will move you one step closer to building a multiplayer game. I’ll specifically introduce more A-Frame VR concepts such as stylized low-poly entities, lights, and animation.

To get started, you will need the following:

  • Internet access (specifically to glitch.com);
  • A new Glitch project;
  • A virtual reality headset (optional, recommended). (I use Google Cardboard, which is offered at $15 apiece.)

Note: A demo of the final product can be viewed here.

Step 1: Setting Up A Basic Scene

In this step, we will set up the following scene for our game. It is composed of a few basic geometric shapes and includes custom lighting, which we will describe in more detail below. As you progress in the tutorial, you will add various animations and effects to transform these basic geometric entities into icebergs sitting in an ocean.


A preview of the game scene’s basic geometric objects
A preview of the game scene’s basic geometric objects (Large preview)

You will start by setting up a website with a single static HTML page. This allows you to code from your desktop and automatically deploy to the web. The deployed website can then be loaded on your mobile phone and placed inside a VR headset. Alternatively, the deployed website can be loaded by a standalone VR headset.

Get started by navigating to glitch.com. Then, do the following:

  1. Click on “New Project” in the top right.
  2. Click on “hello-webpage” in the drop down.

    Glitch.com’s homepage
    Glitch.com’s homepage (Large preview)
  3. Next, click on index.html in the left sidebar. We will refer to this as your “editor”.

Glitch project index.html file
Glitch project: the index.html file (Large preview)

Start by deleting all existing code in the current index.html file. Then, type in the following for a basic webVR project, using A-Frame VR. This creates an empty scene by using A-Frame’s default lighting and camera.

<!DOCTYPE html>
<html>
  <head>
    <title>Ergo | Endless Runner Game in Virtual Reality</title>
    <script src="https://aframe.io/releases/0.7.0/aframe.min.js"></script>
  </head>
  <body>
    <a-scene>
    </a-scene>
  </body>
</html>

Note: You can learn more about A-Frame VR at aframe.io.

To start, add a fog, which will obscure objects far away for us. Modify the a-scene tag on line 8.

<a-scene fog="type: linear; color: #a3d0ed; near:5; far:20">

Moving forward, all objects in the scene will be added between the <a-scene>...</a-scene> tags. The first item is the sky. Between your a-scene tags, add the a-sky entity.

<a-scene ...>
  <a-sky color="#a3d0ed"></a-sky>
</a-scene>

After your sky, add lighting to replace the default A-Frame lighting.

There are three types of lighting:

  • Ambient
    This is an ever-present light that appears to emanate from all objects in the scene. If you wanted a blue tint on all objects, resulting in blue-ish shadows, you would add a blue ambient light. For example, the objects in this Low Poly Island scene are all white. However, a blue ambient light results in a blue hue.
  • Directional
    This is analogous to a flashlight which, as the name suggests, points in a certain direction.
  • Point
    Again, as the name suggests, this emanates light from a point.

Just below your a-sky entity, add the following lights: one directional and one ambient. Both are light blue.

<!-- Lights -->
<a-light type="directional" castShadow="true" intensity="0.4" color="#D0EAF9;" position="5 3 1"></a-light>
<a-light intensity="0.8" type="ambient" color="#B4C5EC"></a-light>

Next, add a camera with a custom position to replace the default A-Frame camera. Just below your a-light entities, add the following:

<!-- Camera -->
<a-camera position="0 0 2.5"></a-camera>

Just below your a-camera entity, add several icebergs using low-poly cones.

<!-- Icebergs -->
<a-cone class="iceberg" segments-radial="5" segments-height="3" height="1" radius-top="0.15" radius-bottom="0.5" position="3 -0.1 -1.5"></a-cone>
<a-cone class="iceberg" segments-radial="7" segments-height="3" height="0.5" radius-top="0.25" radius-bottom="0.35" position="-3 -0.1 -0.5"></a-cone>
<a-cone class="iceberg" segments-radial="6" segments-height="2" height="0.5" radius-top="0.25" radius-bottom="0.25" position="-5 -0.2 -3.5"></a-cone>

Next, add an ocean, which we will temporarily represent with a box, among your icebergs. In your code, add the following after the cones from above.

<!-- Ocean -->
<a-box depth="50" width="50" height="1" color="#7AD2F7" position="0 -0.5 0"></a-box>

Next, add a platform for our endless runner game to take place on. We will represent this platform using the side of a large cone. After the box above, add the following:

<!-- Platform -->
<a-cone scale="2 2 2" shadow position="0 -3.5 -1.5" rotation="90 0 0" radius-top="1.9" radius-bottom="1.9" segments-radial="20" segments-height="20" height="20" emissive="#005DED" emissive-intensity="0.1">
  <a-entity id="tree-container" position="0 .5 -1.5" rotation="-90 0 0">
  </a-entity>
</a-cone>

Finally, add the player, which we will represent using a small glowing sphere, on the platform we just created. Between the <a-entity id="tree-container" ...></a-entity> tags, add the following:

<a-entity id="tree-container"...>
  <!-- Player -->
  <a-entity id="player" player>
    <a-sphere radius="0.05">
      <a-light type="point" intensity="0.35" color="#FF440C"></a-light>
    </a-sphere>
  </a-entity>
</a-entity>

Check that your code now matches the following, exactly. You can also view the full source code for step 1.

<!DOCTYPE html>
<html>
  <head>
    <title>Ergo | Endless Runner Game in Virtual Reality</title>
    <script src="https://aframe.io/releases/0.7.0/aframe.min.js"></script>
  </head>
  <body>
    <a-scene fog="type: linear; color: #a3d0ed; near:5; far:20">

      <a-sky color="#a3d0ed"></a-sky>

      <!-- Lights -->
      <a-light type="directional" castShadow="true" intensity="0.4" color="#D0EAF9;" position="5 3 1"></a-light>
      <a-light intensity="0.8" type="ambient" color="#B4C5EC"></a-light>

      <!-- Camera -->
      <a-camera position="0 0 2.5"></a-camera>

      <!-- Icebergs -->
      <a-cone class="iceberg" segments-radial="5" segments-height="3" height="1" radius-top="0.15" radius-bottom="0.5" position="3 -0.1 -1.5"></a-cone>
      <a-cone class="iceberg" segments-radial="7" segments-height="3" height="0.5" radius-top="0.25" radius-bottom="0.35" position="-3 -0.1 -0.5"></a-cone>
      <a-cone class="iceberg" segments-radial="6" segments-height="2" height="0.5" radius-top="0.25" radius-bottom="0.25" position="-5 -0.2 -3.5"></a-cone>

      <!-- Ocean -->
      <a-box depth="50" width="50" height="1" color="#7AD2F7" position="0 -0.5 0"></a-box>

      <!-- Platform -->
      <a-cone scale="2 2 2" shadow position="0 -3.5 -1.5" rotation="90 0 0" radius-top="1.9" radius-bottom="1.9" segments-radial="20" segments-height="20" height="20" emissive="#005DED" emissive-intensity="0.1">
        <a-entity id="tree-container" position="0 .5 -1.5" rotation="-90 0 0">
          <!-- Player -->
          <a-entity id="player" player>
            <a-sphere radius="0.05">
              <a-light type="point" intensity="0.35" color="#FF440C"></a-light>
            </a-sphere>
          </a-entity>
        </a-entity>
      </a-cone>
    </a-scene>
  </body>
</html>

To preview the webpage, click on “Preview” in the top left. We will refer to this as your preview. Note that any changes in your editor will be automatically reflected in this preview, barring bugs or unsupported browsers.


“Show Live” button in glitch project
“Show Live” button in glitch project (Large preview)

In your preview, you will see the following basic virtual reality scene. You can view this scene by using your favorite VR headset.

Animating Ocean and Fixed White Cursor
Animating Ocean and the fixed white cursor (Large preview)

This concludes the first step, setting up the game scene’s basic geometric objects. In the next step, you will add animations and use other A-Frame VR libraries for more visual effects.

Step 2: Improve Aesthetics for Virtual Reality Scene

In this step, you will add a number of aesthetic improvements to the scene:

  1. Low-poly objects
    You will substitute some of the basic geometric objects with their low-poly equivalents for more convincing, irregular geometric shapes.
  2. Animations
    You will have the player bob up and down, move the icebergs slightly, and make the ocean a moving body of water.

Your final product for this step will match the following:

Low-poly icebergs bobbing around
Low-poly icebergs bobbing around (Large preview)

To start, import A-Frame low-poly components. In <head>...</head>, add the following JavaScript import:

The A-Frame low-poly library implements a number of primitives, such as lp-cone and lp-sphere, each of which is a low-poly version of an A-Frame primitive. You can learn more about A-Frame primitives here.

Next, navigate to the <!-- Icebergs --> section of your code. Replace all <a-cone>s with <lp-cone>.

<!-- Icebergs -->
<lp-cone class="iceberg" ...></lp-cone>
<lp-cone class="iceberg" ...></lp-cone>
<lp-cone class="iceberg" ...></lp-cone>

We will now configure the low-poly primitives. All low-poly primitives support two attributes, which control how exaggerated the low-poly stylization is:

  1. amplitude
    This is the degree of stylization. The greater this number, the more a low-poly shape can deviate from its original geometry.
  2. amplitude-variance
    This is how much stylization can vary, from vertex to vertex. The greater this number, the more variety there is in how much each vertex may deviate from its original geometry.

To get a better intuition for what these two variables mean, you can modify these two attributes in the A-Frame low-poly demo.
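Conceptually, the two attributes bound a random displacement applied to each vertex of the shape. The sketch below illustrates that idea only; it is not aframe-low-poly’s actual implementation, and the function name and data shapes are hypothetical.

```javascript
// Conceptual sketch of low-poly stylization: displace each vertex by a
// random amount bounded by amplitude +/- amplitude-variance. Illustration
// only -- not the aframe-low-poly library's real code.
function jitterVertices(vertices, amplitude, amplitudeVariance) {
  return vertices.map(v => {
    // Each vertex gets its own maximum deviation, varied per vertex.
    const maxDeviation = amplitude + (Math.random() - 0.5) * amplitudeVariance;
    return {
      x: v.x + (Math.random() - 0.5) * maxDeviation,
      y: v.y + (Math.random() - 0.5) * maxDeviation,
      z: v.z + (Math.random() - 0.5) * maxDeviation,
    };
  });
}

const flat = [{ x: 0, y: 0, z: 0 }, { x: 1, y: 0, z: 0 }];
const rough = jitterVertices(flat, 0.12, 0.05);
// Each coordinate stays within amplitude + variance/2 of its original value.
```

A larger `amplitude` pushes every vertex further from the smooth cone; a larger `amplitude-variance` makes some vertices deviate much more than others, which is what produces the irregular, hand-carved look.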

For the first iceberg, set amplitude-variance to 0.25. For the second iceberg, set amplitude to 0.12. For the last iceberg, set amplitude to 0.1.

<!-- Icebergs -->
<lp-cone class="iceberg" amplitude-variance="0.25" ...></lp-cone>
<lp-cone class="iceberg" amplitude="0.12" ... ></lp-cone>
<lp-cone class="iceberg" amplitude="0.1" ...></lp-cone>

To finish the icebergs, animate both position and rotation for all three icebergs. Feel free to configure these positions and rotations as desired.

The below features a sample setting:

<lp-cone class="iceberg" amplitude-variance="0.25" ...>
  <a-animation attribute="rotation" from="-5 0 0" to="5 0 0" repeat="indefinite" direction="alternate"></a-animation>
  <a-animation attribute="position" from="3 -0.2 -1.5" to="4 -0.2 -2.5" repeat="indefinite" direction="alternate" dur="12000" easing="linear"></a-animation>
</lp-cone>
<lp-cone class="iceberg" amplitude="0.12" ...>
  <a-animation attribute="rotation" from="0 0 -5" to="5 0 0" repeat="indefinite" direction="alternate" dur="1500"></a-animation>
  <a-animation attribute="position" from="-4 -0.2 -0.5" to="-2 -0.2 -0.5" repeat="indefinite" direction="alternate" dur="15000" easing="linear"></a-animation>
</lp-cone>
<lp-cone class="iceberg" amplitude="0.1" ...>
  <a-animation attribute="rotation" from="5 0 -5" to="5 0 0" repeat="indefinite" direction="alternate" dur="800"></a-animation>
  <a-animation attribute="position" from="-3 -0.2 -3.5" to="-5 -0.2 -5.5" repeat="indefinite" direction="alternate" dur="15000" easing="linear"></a-animation>
</lp-cone>

Navigate to your preview, and you should see the low-poly icebergs bobbing around.

Low-poly icebergs bobbing around
Low-poly icebergs bobbing around (Large preview)

Next, update the platform and associated player. Here, upgrade the cone to a low-poly object, changing a-cone to lp-cone for <!-- Platform -->. Additionally, add configurations for amplitude.

<!-- Platform -->
<lp-cone amplitude="0.05" amplitude-variance="0.05" scale="2 2 2"...>
    ...
</lp-cone>

Next, still within the platform section, navigate to the <!-- Player --> subsection of your code. Add the following animations for position, size, and intensity.

<!-- Player -->
<a-entity id="player" ...>
  <a-sphere ...>
    <a-animation repeat="indefinite" direction="alternate" attribute="position" ease="ease-in-out" from="0 0.5 0.6" to="0 0.525 0.6"></a-animation>
    <a-animation repeat="indefinite" direction="alternate" attribute="radius" from="0.05" to="0.055" dur="1500"></a-animation>
    <a-light ...>
      <a-animation repeat="indefinite" direction="alternate-reverse" attribute="intensity" ease="ease-in-out" from="0.35" to="0.5"></a-animation>
    </a-light>
  </a-sphere>
</a-entity>

Navigate to your preview, and you will see your player bobbing up and down, with a fluctuating light on a low-poly platform.

Bobbing player with fluctuating light
Bobbing player with fluctuating light (Large preview)

Next, let’s animate the ocean. Here, you can use a lightly-modified version of Don McCurdy’s ocean. The modifications allow us to configure how large and fast the ocean’s waves move.

Create a new file via the Glitch interface, by clicking on “+ New File” on the left. Name this new file assets/ocean.js. Paste the following into your new ocean.js file:

/**
 * Flat-shaded ocean primitive.
 * https://github.com/donmccurdy/aframe-extras
 *
 * Based on a Codrops tutorial:
 * http://tympanus.net/codrops/2016/04/26/the-aviator-animating-basic-3d-scene-threejs/
 */
AFRAME.registerPrimitive('a-ocean', {
  defaultComponents: {
    ocean: {},
    rotation: {x: -90, y: 0, z: 0}
  },
  mappings: {
    width: 'ocean.width',
    depth: 'ocean.depth',
    density: 'ocean.density',
    amplitude: 'ocean.amplitude',
    'amplitude-variance': 'ocean.amplitudeVariance',
    speed: 'ocean.speed',
    'speed-variance': 'ocean.speedVariance',
    color: 'ocean.color',
    opacity: 'ocean.opacity'
  }
});

AFRAME.registerComponent('ocean', {
  schema: {
    // Dimensions of the ocean area.
    width: {default: 10, min: 0},
    depth: {default: 10, min: 0},

    // Density of waves.
    density: {default: 10},

    // Wave amplitude and variance.
    amplitude: {default: 0.1},
    amplitudeVariance: {default: 0.3},

    // Wave speed and variance.
    speed: {default: 1},
    speedVariance: {default: 2},

    // Material.
    color: {default: '#7AD2F7', type: 'color'},
    opacity: {default: 0.8}
  },

  /**
   * Use play() instead of init(), because component mappings – unavailable as dependencies – are
   * not guaranteed to have parsed when this component is initialized.
   */
  play: function () {
    const el = this.el,
        data = this.data;
    let material = el.components.material;

    const geometry = new THREE.PlaneGeometry(data.width, data.depth, data.density, data.density);
    geometry.mergeVertices();
    this.waves = [];
    for (let v, i = 0, l = geometry.vertices.length; i < l; i++) {
      v = geometry.vertices[i];
      this.waves.push({
        z: v.z,
        ang: Math.random() * Math.PI * 2,
        amp: data.amplitude + Math.random() * data.amplitudeVariance,
        speed: (data.speed + Math.random() * data.speedVariance) / 1000 // radians / frame
      });
    }

    if (!material) {
      material = {};
      material.material = new THREE.MeshPhongMaterial({
        color: data.color,
        transparent: data.opacity < 1,
        opacity: data.opacity,
        shading: THREE.FlatShading,
      });
    }

    this.mesh = new THREE.Mesh(geometry, material.material);
    el.setObject3D('mesh', this.mesh);
  },

  remove: function () {
    this.el.removeObject3D('mesh');
  },

  tick: function (t, dt) {
    if (!dt) return;

    const verts = this.mesh.geometry.vertices;
    for (let v, vprops, i = 0; (v = verts[i]); i++){
      vprops = this.waves[i];
      v.z = vprops.z + Math.sin(vprops.ang) * vprops.amp;
      vprops.ang += vprops.speed * dt;
    }
    this.mesh.geometry.verticesNeedUpdate = true;
  }
});

Navigate back to your index.html file. In the <head> of your code, import the new JavaScript file:

  <script src="https://cdn.jsdelivr.net..."></script>
  <script src="./assets/ocean.js"></script>
</head>

Navigate to the <!-- Ocean --> section of your code. Replace the a-box with an a-ocean. Just as before, we set the amplitude and amplitude-variance of our low-poly object.

<!-- Ocean -->
<a-ocean depth="50" width="50" amplitude="0" amplitude-variance="0.1" speed="1.5" speed-variance="1" opacity="1" density="50"></a-ocean>
<a-ocean depth="50" width="50" opacity="0.5" amplitude="0" amplitude-variance="0.15" speed="1.5" speed-variance="1" density="50"></a-ocean>

For your final aesthetic modification, add a white round cursor to indicate where the user is pointing. Navigate to the <!-- Camera --> section of your code.

<!-- Camera -->
<a-camera ...>
  <a-entity id="cursor-mobile" cursor="fuse: true; fuseTimeout: 250"
    position="0 0 -1"
    geometry="primitive: ring; radiusInner: 0.02; radiusOuter: 0.03"
    material="color: white; shader: flat"
    scale="0.5 0.5 0.5"
    raycaster="far: 50; interval: 1000; objects: .clickable">
  <a-animation begin="fusing" easing="ease-in" attribute="scale"
    fill="backwards" from="1 1 1" to="0.2 0.2 0.2" dur="250"></a-animation>
  </a-entity>
</a-camera>

Ensure that your index.html code matches the Step 2 source code. Navigate to your preview, and you’ll find the updated ocean along with a white circle fixed to the center of your view.

Bobbing player with fluctuating light
Bobbing player with fluctuating light (Large preview)

This concludes your aesthetic improvements to the scene. In this section, you learned how to use and configure low-poly versions of A-Frame primitives, e.g. lp-cone. In addition, you added a number of animations for different object attributes, such as position, rotation, and light intensity. In the next step, you will add the ability for the user to control the player — just by looking at different lanes.

Step 3: Add Virtual Reality Gaze Controls

Recall that our audience is a user wearing a virtual reality headset. As a result, your game cannot depend on keyboard input for controls. To make this game accessible, our VR controls will rely only on the user’s head rotation. Simply look to the right to move the player to the right, look to the center to move to the middle, and look to the left to move to the left. Our final product will look like the following.

Note: The demo GIF below was recorded on a desktop, with user drag as a substitute for head rotation.

Controlling game character with head rotation
Controlling game character with head rotation (Large preview)

Start from your index.html file. In the <head>...</head> tag, import your new JavaScript file, assets/ergo.js. This new JavaScript file will contain the game’s logic.

  <script src="http://..."></script>
  <script src="./assets/ergo.js"></script>
</head>

Then, add a new lane-controls attribute to your a-camera object:

<!-- Camera -->
<a-camera lane-controls position...>
</a-camera>

Next, create your new JavaScript file using “+ New File” to the left. Use assets/ergo.js for the filename. For the remainder of this step, you will be working in this new JavaScript file. In this new file, define a new function to set up controls, and invoke it immediately. Make sure to include the comments below, as we will refer to sections of code by those names.

/************
 * CONTROLS *
 ************/

function setupControls() {
}

/********
 * GAME *
 ********/

setupControls();

Note: The setupControls function is invoked in the global scope, because A-Frame components must be registered before the <a-scene> tag. I will explain what a component is below.

In your setupControls function, register a new A-Frame component. A component modifies an entity in A-Frame, allowing you to add custom animations, change how an entity initializes, or respond to user input. There are many other use cases, but you will focus on the last one: responding to user input. Specifically, you will read user rotation and move the player accordingly.

In the setupControls function, register the A-Frame component we added to the camera earlier, lane-controls. We will add an event listener for the tick event. This event triggers at every animation frame. In this event listener, log output at every tick.

function setupControls() {
    AFRAME.registerComponent('lane-controls', {
        tick: function(time, timeDelta) {
            console.log(time);
        }
    });
}

Navigate to your preview. Open your browser developer console by right-clicking anywhere and selecting “Inspect”. This applies to Firefox, Chrome, and Safari. Then, select “Console” from the top navigation bar. Ensure that you see timestamps flowing into the console.


Timestamps in console
Timestamps in console (Large preview)

Navigate back to your editor. Still in assets/ergo.js, replace the body of setupControls with the following. Fetch the camera rotation using this.el.object3D.rotation, and log the lane to move the player to.

function setupControls() {
  AFRAME.registerComponent('lane-controls', {
    tick: function (time, timeDelta) {
      var rotation = this.el.object3D.rotation;

      if      (rotation.y > 0.1)  console.log("left");
      else if (rotation.y < -0.1) console.log("right");
      else                        console.log("middle");
    }
  })
}

Navigate back to your preview. Again, open your developer console. Try rotating the camera slightly, and observe console output update accordingly.

Lane log based on camera rotation
Lane log based on camera rotation (Large preview)

Before the controls section, add three constants representing the left, middle, and right lane x values.

const POSITION_X_LEFT = -0.5;
const POSITION_X_CENTER = 0;
const POSITION_X_RIGHT = 0.5;

/************
 * CONTROLS *
 ************/

...

At the start of the controls section, define a new global variable representing the player position.

/************
 * CONTROLS *
 ************/

// Position is one of 0 (left), 1 (center), or 2 (right)
var player_position_index = 1;

function setupControls() {
...

After the new global variable, define a new function that will move the player to each lane.

var player_position_index = 1;

/**
 * Move player to provided index
 * @param {int} Lane to move player to
 */
function movePlayerTo(position_index) {
}

function setupControls() {
...

Inside this new function, start by updating the global variable. Then, define a dummy position.

function movePlayerTo(position_index) {
  player_position_index = position_index;

  var position = {x: 0, y: 0, z: 0}
}

After defining the position, update it according to the function input.

function movePlayerTo(position_index) {
  ...
  if      (position_index == 0) position.x = POSITION_X_LEFT;
  else if (position_index == 1) position.x = POSITION_X_CENTER;
  else                          position.x = POSITION_X_RIGHT;
}

Finally, update the player position.

function movePlayerTo(position_index) {
  ...
  document.getElementById('player').setAttribute('position', position);
}

Double-check that your function matches the following.

/**
 * Move player to provided index
 * @param {int} Lane to move player to
 */
function movePlayerTo(position_index) {
  player_position_index = position_index;
  
  var position = {x: 0, y: 0, z: 0}
  if      (position_index == 0) position.x = POSITION_X_LEFT;
  else if (position_index == 1) position.x = POSITION_X_CENTER;
  else                          position.x = POSITION_X_RIGHT;
  document.getElementById('player').setAttribute('position', position);
}

Navigate back to your preview. Open the developer console. Invoke your new movePlayerTo function from the console to ensure that it functions.

> movePlayerTo(2)  // should move the player to the right lane

Navigate back to your editor. For the final step, update your setupControls to move the player depending on camera rotation. Here, we replace the console.log with movePlayerTo invocations.

function setupControls() {
  AFRAME.registerComponent('lane-controls', {
    tick: function (time, timeDelta) {
      var rotation = this.el.object3D.rotation;

      if      (rotation.y > 0.1)  movePlayerTo(0);
      else if (rotation.y < -0.1) movePlayerTo(2);
      else                        movePlayerTo(1);
    }
  })
}
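The rotation check inside the tick handler can be factored into a pure function, which makes the thresholds easy to test outside of A-Frame. This is a refactoring suggestion, not part of the tutorial's code:

```javascript
// Pure version of the rotation-to-lane mapping used in lane-controls.
// rotation.y is in radians: positive when the head turns left, negative
// when it turns right; +/-0.1 rad (~5.7 degrees) acts as a dead zone.
function laneForRotation(rotationY) {
  if (rotationY > 0.1) return 0;       // left lane
  else if (rotationY < -0.1) return 2; // right lane
  return 1;                            // middle lane
}

console.log(laneForRotation(0.3));  // 0
console.log(laneForRotation(0));    // 1
console.log(laneForRotation(-0.2)); // 2
```

The tick handler would then reduce to `movePlayerTo(laneForRotation(this.el.object3D.rotation.y))`.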

Ensure that your assets/ergo.js matches the corresponding file in the Step 3 source code. Navigate back to your preview. Rotate the camera from side to side, and your player will now track the user’s rotation.

Controlling game character with head rotation
Controlling game character with head rotation (Large preview)

This concludes gaze controls for your virtual reality endless runner game.

In this section, we learned how to use A-Frame components and saw how to modify A-Frame entity properties. This also concludes part 1 of our endless runner game tutorial. You now have a virtual reality model equipped with aesthetic improvements like low-poly stylization and animations, in addition to a virtual-reality-headset-friendly gaze control for players to use.

Conclusion

We created a simple, interactive virtual reality model, as a start for our VR endless runner game. We covered a number of A-Frame concepts such as primitives, animations, and components — all of which are necessary for building a game on top of A-Frame VR.

Here are extra resources and next steps for working more with these technologies:

  • A-Frame VR
    Official documentation for A-Frame VR, covering the topics used above in more detail.
  • A-Frame Homepage
    Examples of A-Frame projects, exhibiting different A-Frame capabilities.
  • Low-Poly Island
    VR model using the same lighting, textures, and animations as the ones used for this endless runner game.

In the next part of this article series, I’ll show you how you can implement the game’s core logic and use more advanced A-Frame VR scene manipulations in JavaScript.

Stay tuned for next week!

Smashing Editorial
(rb, ra, il)

Source: Smashing Magazine, How To Build An Endless Runner Game In Virtual Reality (Part 1)

Building Robust Layouts With Container Units

dreamt up by webguru in Uncategorized | Comments Off on Building Robust Layouts With Container Units

Building Robust Layouts With Container Units

Building Robust Layouts With Container Units

Russell Bishop



Container units are a specialized set of CSS variables that allow you to build grids, layouts, and components using columns and gutters. They mirror the layout functionality found in UI design software where configuring just three values provides your document with a global set of columns and gutters to measure and calculate from.

They also provide consistent widths everywhere in your document — regardless of their nesting depth, their parent’s width, or their sibling elements. So instead of requiring a repeated set of .grid and .row parent elements, container units measure from the :root of your document — just like using a rem unit.

container units measure from the root of your document just like using a rem unit
(Large preview)

What Makes Container Units Different?

Grids from popular frameworks (such as Bootstrap or Bulma) share the same fundamental limitation: they rely on relative units such as ‘percentages’ to build columns and gutters.

This approach ties developers to using a specific HTML structure whenever they want to use those measurements and requires parent > child nesting for widths to calculate correctly.

Not convinced? Try for yourself:

  • Open any CSS framework’s grid demo;
  • Inspect a column and note the width;
  • Using DevTools, drag that element somewhere else in the document;
  • Note that the column’s width has changed in transit.

Freedom Of Movement (…Not Brexit)

Container units allow you more freedom to size elements using a set of global units. If you want to build a sidebar the width of three columns, all you need is the following:

.sidebar {
  width: calc(3 * var(--column-unit));
  /* or columns(3) */
}

Your element with class="sidebar" can live anywhere inside of your document — without specific parent elements or nesting.

Measuring three columns and using them for a sidebar.
Measuring three columns and using them for a sidebar (Large preview)

Sharing Tools With Designers

Designers and developers have an excellent middle-ground that helps translate from design software to frontend templates: numbers.

Modular scales are exceptional not just because they help designers bring harmony to their typography, but also because developers can replicate them as a simple system. The same goes for Baseline Grids: superb, self-documenting systems with tiny configuration (one root number) and massive consistency.

Container units are set up in the same way that designers use Sketch to configure Layout Settings:


Layout settings
Layout settings (Large preview)

Sketch gridlines
Sketch gridlines (Large preview)

Any opportunity for designers and developers to build with the same tools is a huge efficiency boost and fosters new thinking in both specialisms.

Start Building With Container Units

Define your grid proportions with three values:

:root {
  --grid-width: 960;
  --grid-column-width: 60;
  --grid-columns: 12;
}

These three values define how wide a column is in proportion to your grid. In the example above, a column’s width is 60 / 960. Gutters are calculated automatically from the remaining space.
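The arithmetic behind these proportions is simple enough to verify by hand (the snippet below uses JavaScript purely as a calculator; in the actual stylesheet, CSS calc() does this work). Each column takes 60/960 of the container, and the 11 gutters split whatever space the 12 columns leave over:

```javascript
// Verify the container-unit proportions from the grid configuration.
const gridWidth = 960;
const gridColumnWidth = 60;
const gridColumns = 12;
const gridGutters = gridColumns - 1; // 11 gutters between 12 columns

const columnProportion = gridColumnWidth / gridWidth;
const gutterProportion = (1 - gridColumns * columnProportion) / gridGutters;

console.log(columnProportion);            // 0.0625
console.log(gutterProportion.toFixed(5)); // '0.02273'
```

Multiplying each proportion by --container-width (for example, 84vw) yields the concrete --column-unit and --gutter-unit values.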

Finally, set a width for your container:

:root {
  --container-width: 84vw;
}

Note: --container-width should be set as an absolute unit. I recommend using viewport units or rems.

You can update your --container-width at any breakpoint (all of your container units will update accordingly):

@media (min-width: 800px) {
  :root {
    --container-width: 90vw;
  }
}

@media (min-width: 1200px) {
  :root {
    --container-width: 85vw;
  }
}

/* what about max-width? */
@media (min-width: 1400px) {
  :root {
    --container-width: 1200px;
  }
}

Breakpoints
Breakpoints (Large preview)

You’ve now unlocked two very robust units to build from:

  1. --column-unit
  2. --gutter-unit

Column Spans: The Third And Final Weapon

More common than building with either columns or gutters is to span across both of them:

6 column span = 6 columns + 5 gutters
6 column span = 6 columns + 5 gutters (Large preview)

Column spans are easy to calculate, but not very pretty to write. For spanning across columns, I would recommend using a pre-processor:

.panel {
  /* vanilla css */
  width: calc(6 * var(--column-and-gutter-unit) - var(--gutter-unit));
  
  /* pre-processor shortcut */
  width: column-spans(6);  
}
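As a sanity check of the span formula, a span of n columns covers n column units plus (n − 1) gutter units. Using the proportions defined earlier (column = 0.0625 and gutter = 0.25/11 of the container), a full 12-column span should cover exactly the whole container:

```javascript
// Check the column-span formula against the earlier grid proportions.
const column = 0.0625;    // 60 / 960
const gutter = 0.25 / 11; // remaining space split across 11 gutters

function columnSpan(n) {
  return n * column + (n - 1) * gutter;
}

// 12 columns + 11 gutters should fill the container (proportion = 1):
console.log(columnSpan(12).toFixed(4)); // '1.0000'
```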

Of course, you can use pre-processor shortcuts for every container unit I’ve mentioned so far. Let’s put them to the test with a design example.

Building Components With Container Units

Let’s take a design example and break it down:


design example
(Large preview)

This example uses columns, gutters and column spans. Since we’re just storing a value, container units can be used for other CSS properties, like defining a height or providing padding:

.background-image {
  width: column-spans(9);
  padding-bottom: gutters(6);
  /* 6 gutters taller than the foreground banner */
}

.foreground-banner {
  width: column-spans(8);
  padding: gutters(2);
}

.button {
  height: gutters(3);
  padding: gutters(1);
}

Grab The Code

:root {
  /* Grid proportions */
  --grid-width: 960;
  --grid-column-width: 60;
  --grid-columns: 12;

  /* Grid logic */
  --grid-gutters: calc(var(--grid-columns) - 1);

  /* Grid proportion logic */
  --column-proportion: calc(var(--grid-column-width) / var(--grid-width));
  --gutter-proportion: calc((1 - (var(--grid-columns) * var(--column-proportion))) / var(--grid-gutters));

  /* Container Units */
  --column-unit: calc(var(--column-proportion) * var(--container-width));
  --gutter-unit: calc(var(--gutter-proportion) * var(--container-width));
  --column-and-gutter-unit: calc(var(--column-unit) + var(--gutter-unit));

  /* Container Width */
  --container-width: 80vw;
}

@media (min-width: 1000px) {
  :root {
    --container-width: 90vw;
  }
}

@media (min-width: 1400px) {
  :root {
    --container-width: 1300px;
  }
}

Why Use CSS Variables?

“Pre-processors have been able to do that for years with $variables — why do you need CSS variables?”

Not… quite. Although pre-processor variables can run calculations, they cannot avoid compiling duplicate code whenever one of the variables updates its value.

Let’s take the following condensed example of a grid:

.grid {
  $columns: 2;
  $gutter: $columns * 1rem;
  display: grid;
  grid-template-columns: repeat($columns, 1fr);
  grid-gap: $gutter;

  @media (min-width: $medium) {
    $columns: 3;
    grid-template-columns: repeat($columns, 1fr);
    grid-gap: $gutter;
  }
  
  @media (min-width: $large) {
    $columns: 4;
    grid-template-columns: repeat($columns, 1fr);
    grid-gap: $gutter;
  }
}

This example shows how every reference to a SASS/LESS variable has to be re-compiled if the variable changes — duplicating code over and over for each instance.

But CSS Variables share their logic with the browser, so browsers can do the updating for you.

.grid {
  --columns: 2;
  --gutter: calc(var(--columns) * 1rem);
  display: grid;
  grid-template-columns: repeat(var(--columns), 1fr);
  grid-gap: var(--gutter);

  @media (min-width: $medium) {
    --columns: 3;
  }
  
  @media (min-width: $large) {
    --columns: 4;
  }
}

This concept helps form the logic of container units; by storing logic once at the root, every element in your document watches those values as they update, and responds accordingly.

Give it a try!

Recommended Reading

Smashing Editorial
(dm, ra, il)

Source: Smashing Magazine, Building Robust Layouts With Container Units

Using Composer With WordPress

dreamt up by webguru in Uncategorized | Comments Off on Using Composer With WordPress

Using Composer With WordPress

Using Composer With WordPress

Leonardo Losoviz



WordPress is getting modernized. The recent inclusion of JavaScript-based Gutenberg as part of the core has added modern capabilities for building sites on the frontend, and the upcoming bump of PHP’s minimum version, from the current 5.2.4 to 5.6 in April 2019 and 7.0 in December 2019, will make available a myriad of new features to build powerful sites.

In my previous article on Smashing in which I identified the PHP features newly available to WordPress, I argued that the time is ripe to make components the basic unit for building functionalities in WordPress. On one side, Gutenberg already makes the block (which is a high-level component) the basic unit to build the webpage on the frontend; on the other side, by bumping up the required minimum version of PHP, the WordPress backend has access to the whole collection of PHP’s Object-Oriented Programming features (such as classes and objects, interfaces, traits and namespaces), which are all part of the toolset to think/code in components.

So, why components? What’s so great about them? A “component” is not an implementation (such as a React component), but instead, it’s a concept: It represents the act of encapsulating properties inside objects, and grouping objects together into a package which solves a specific problem. Components can be implemented for both the frontend (like those coded through JavaScript libraries such as React or Vue, or CSS component libraries such as Bootstrap) and the backend.

We can use already-created components and customize them for our projects, boosting our productivity by not having to reinvent the wheel every single time. And because components focus on solving a specific issue and are naturally decoupled from the application, they can be tested and bug-fixed very easily, making the application more maintainable in the long term.

The concept of components can be employed for different uses, so we need to make sure we are talking about the same use case. In a previous article, I described how to componentize a website; the goal was to transform the webpage into a series of components, wrapping each other from a single topmost component all the way down to the most basic components (to render the layout). In that case, the use case for the component is for rendering — similar to a React component but coded in the backend. In this article, though, the use case for components is importing and managing functionality into the application.

Introduction To Composer And Packagist

To import and manage own and third-party components into our PHP projects, we can rely on the PHP-dependency manager Composer which by default retrieves packages from the PHP package repository Packagist (where a package is essentially a directory containing PHP code). With their ease of use and exceptional features, Composer + Packagist have become key tools for establishing the foundations of PHP-based applications.

Composer allows us to declare the libraries the project depends on, and it will manage (install/update) them. It works recursively: libraries depended upon by dependencies will be imported to the project and managed too. Composer has a mechanism to resolve conflicts: If two different libraries depend on different versions of the same library, Composer will try to find a version that is compatible with both requirements, or raise an error if none exists.

To use Composer, the project simply needs a composer.json file in its root folder. This file defines the dependencies of the project (each for a specific version constraint based on semantic versioning) and may contain other metadata as well. For instance, the following composer.json file makes a project require nesbot/carbon, a library providing an extension for DateTime, for the latest patch of its version 2.12:

{
    "require": {
        "nesbot/carbon": "2.12.*"
    }
}

We can edit this file manually, or it can be created/updated through commands. For the case above, we simply open a terminal window, head to the project’s root directory, and type:

composer require "nesbot/carbon"

This command will search for the required library in Packagist (which is found here) and add its latest version as a dependency on the existing composer.json file. (If this file doesn’t yet exist, it will first create it.) Then, we can import the dependencies into the project, which are by default added under the vendor/ folder, by simply executing:

composer install

Whenever a dependency is updated (for instance, nesbot/carbon released version 2.12.1 and the currently installed one is 2.12.0), Composer will take care of importing the corresponding library when we execute:

composer update

If we are using Git, we only have to specify the vendor/ folder on the .gitignore file to not commit the project dependencies under version control, making it a breeze to keep our project’s code thoroughly decoupled from external libraries.

Composer offers plenty of additional features, which are properly described in the documentation. However, already in its most basic use, Composer gives developers unlimited power for managing the project’s dependencies.
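Version constraints such as "2.12.*" above follow semantic versioning. As a rough illustration of what a patch-wildcard constraint accepts — this is emphatically NOT Composer's actual resolver, just the matching idea — any version sharing the major.minor prefix qualifies:

```javascript
// Illustration only: which versions a patch wildcard like "2.12.*" accepts.
function matchesWildcard(constraint, version) {
  const prefix = constraint.replace(/\.\*$/, '.'); // "2.12.*" -> "2.12."
  return version.startsWith(prefix);
}

console.log(matchesWildcard('2.12.*', '2.12.0')); // true
console.log(matchesWildcard('2.12.*', '2.12.9')); // true
console.log(matchesWildcard('2.12.*', '2.13.0')); // false
```

Composer's real constraint syntax is far richer (ranges, carets such as "^4.1", tildes, and more), as covered in its documentation.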

Introduction To WPackagist

Similar to Packagist, WPackagist is a PHP package repository. However, it comes with one particularity: It contains all the themes and plugins hosted on the WordPress plugin and theme directories, making them available to be managed through Composer.

To use WPackagist, our composer.json file must include the following information:

{
    "repositories":[
        {
            "type":"composer",
            "url":"https://wpackagist.org"
        }
    ]
}

Then, any theme and plugin can be imported to the project by using "wpackagist-theme" and "wpackagist-plugin" respectively as the vendor name, and the slug of the theme or plugin under the WordPress directory (such as "akismet" in https://wordpress.org/plugins/akismet/) as the package name. Because themes do not have a trunk version, the recommended version constraint for themes is "*":

{
    "require": {
        "wpackagist-plugin/akismet":"^4.1",
        "wpackagist-plugin/bbpress":">=2.5.12",
        "wpackagist-theme/twentynineteen":"*"
    }
}

Packages available in WPackagist have been given the type “wordpress-plugin” or “wordpress-theme”. As a consequence, after running composer update, instead of installing the corresponding themes and plugins under the default folder vendor/, these will be installed where WordPress expects them: under folders wp-content/themes/ and wp-content/plugins/ respectively.

Possibilities And Limitations Of Using WordPress And Composer Together

So far, so good: Composer makes it a breeze to manage a PHP project’s dependencies. However, WordPress’ core hasn’t adopted it as its dependency management tool of choice, primarily because WordPress is a legacy application that was never designed to be used with Composer; the community can’t agree on whether WordPress should be considered the site itself or a site’s dependency, and integrating either approach requires hacks.

In this concern, WordPress is outperformed by newer frameworks which could incorporate Composer as part of their architecture. For instance, Laravel underwent a major rewriting in 2013 to establish Composer as an application-level package manager. As a consequence, WordPress’ core still does not include the composer.json file required to manage WordPress as a Composer dependency.

Knowing that WordPress can’t be natively managed through Composer, let’s explore the ways such support can be added, and what roadblocks we encounter in each case.

There are three basic ways in which WordPress and Composer can work together:

  1. Manage dependencies when developing a theme or a plugin;
  2. Manage themes and plugins on a site;
  3. Manage the site completely (including its themes, plugins and WordPress’ core).

And there are two basic situations concerning who will have access to the software (a theme or plugin, or the site):

  1. The developer can have absolute control of how the software will be updated, e.g. by managing the site for the client, or providing training on how to do it;
  2. The developer doesn’t have absolute control of the admin user experience, e.g. by releasing themes or plugins through the WordPress directory, which will be used by an unknown party.

From the combination of these variables, we will have more or less freedom in how deep we can integrate WordPress and Composer together.

From a philosophical aspect concerning the objective and target group of each tool, while Composer empowers developers, WordPress focuses primarily on the needs of the end users first, and only then on the needs of the developers. This situation is not self-contradictory: For instance, a developer can create and launch the website using Composer, and then hand the site over to the end user who (from that moment on) will use the standard procedures for installing themes and plugins — bypassing Composer. However, then the site and its composer.json file fall out of sync, and the project can’t be managed reliably through Composer any longer: Manually deleting all plugins from the wp-content/plugins/ folder and executing composer update will not re-download those plugins added by the end user.

The alternative to keeping the project in sync would be to ask the user to install themes and plugins through Composer. However, this approach goes against WordPress’ philosophy: Asking the end user to execute a command such as composer install to install the dependencies from a theme or plugin adds friction, and WordPress can’t expect every user to be able to execute this task, as simple as it may be. So this approach can’t be the default; instead, it can be used only if we have absolute control of the user experience under wp-admin/, such as when building a site for our own client and providing training on how to update the site.

The default approach, which handles the case when the party using the software is unknown, is to release themes and plugins with all of their dependencies bundled in. This implies that the dependencies must also be uploaded to WordPress’ plugin and theme subversion repositories, defeating the purpose of Composer. Following this approach, developers are still able to use Composer for development, however, not for releasing the software.

This approach is not failsafe either: If two different plugins bundle different, mutually incompatible versions of the same library, and these two plugins are installed on the same site, it could cause the site to malfunction. A solution to this issue is to modify the dependencies’ namespace to some custom namespace, which ensures that different versions of the same library, by having different namespaces, are treated as different libraries. This can be achieved through a custom script or through Mozart, a library which composes all dependencies as a package inside a WordPress plugin.

For managing the site completely, Composer must install WordPress under a subdirectory as to be able to install and update WordPress’ core without affecting other libraries, hence the setup must consider WordPress as a site’s dependency and not the site itself. (Composer doesn’t take a stance: This decision is for the practical purpose of being able to use the tool; from a theoretical perspective, we can still consider WordPress to be the site.) Because WordPress can be installed in a subdirectory, this doesn’t represent a technical issue. However, WordPress is by default installed on the root folder, and installing it in a subdirectory involves a conscious decision taken by the user.

To make it easier to completely manage WordPress with Composer, several projects have taken the stance of installing WordPress in a subfolder and providing an opinionated composer.json file with a setup that works well: core contributor John P. Bloch provides a mirror of WordPress’ core, and Roots provides a WordPress boilerplate called Bedrock. I will describe how to use each of these two projects in the sections below.

Managing The Whole WordPress Site Through John P. Bloch’s Mirror Of WordPress Core

I have followed Andrey “Rarst” Savchenko’s recipe for creating the whole site’s Composer package, which makes use of John P. Bloch’s mirror of WordPress’ core. Below, I will reproduce his method, adding some extra information and mentioning the gotchas I found along the way.

First, create a composer.json file with the following content in the root folder of your project:

{
    "type": "project",
    "config": {
        "vendor-dir": "content/vendor"
    },
    "extra": {
        "wordpress-install-dir": "wp"
    },
    "require": {
        "johnpbloch/wordpress": ">=5.1"
    }
}

Through this configuration, Composer will install WordPress 5.1 under folder "wp", and dependencies will be installed under folder "content/vendor". Then head to the project’s root folder in terminal and execute the following command for Composer to do its magic and install all dependencies, including WordPress:

composer install --prefer-dist

Let’s next add a couple of plugins and the theme, for which we must also add WPackagist as a repository, and let’s configure these to be installed under "content/plugins" and "content/themes" respectively. Because these are not the default locations expected by WordPress, we will later on need to tell WordPress where to find them through constant WP_CONTENT_DIR.

Note: WordPress’ core includes by default a few themes and plugins under folders "wp/wp-content/themes" and "wp/wp-content/plugins", however, these will not be accessed.

Add the following content to composer.json, in addition to the previous one:

{
    "repositories": [
        {
            "type": "composer",
            "url" : "https://wpackagist.org"
        }
    ],
    "require": {
        "wpackagist-plugin/wp-super-cache": "1.6.*",
        "wpackagist-plugin/bbpress": "2.5.*",
        "wpackagist-theme/twentynineteen": "*"
    },
    "extra": {
        "installer-paths": {
            "content/plugins/{$name}/": ["type:wordpress-plugin"],
            "content/themes/{$name}/": ["type:wordpress-theme"]
        }
    }
}

And then execute in terminal:

composer update --prefer-dist

Hallelujah! The theme and plugins have been installed! Since all dependencies are distributed across folders wp, content/vendor, content/plugins and content/themes, we can easily ignore these when committing our project under version control through Git. For this, create a .gitignore file with this content:

wp/
content/vendor/
content/themes/
content/plugins/

Note: We could also directly ignore folder content/, which would already ignore all media files under content/uploads/ and files generated by plugins, which most likely should not go under version control.

There are a few things left to do before we can access the site. First, copy the wp/wp-config-sample.php file to the root folder as wp-config.php (and add a line with wp-config.php to the .gitignore file to avoid committing it, since this file contains environment information), and edit it with the usual information required by WordPress (database information and secret keys and salts). Then, add the following lines at the top of wp-config.php, which will load Composer’s autoloader and will set constant WP_CONTENT_DIR to folder content/:

// Load Composer’s autoloader
require_once (__DIR__.'/content/vendor/autoload.php');

// Move the location of the content dir
define('WP_CONTENT_DIR', dirname(__FILE__).'/content');

By default, WordPress sets constant WP_CONTENT_URL to the value get_option('siteurl').'/wp-content'. Because we have changed the content directory from the default "wp-content" to "content", we must also set a new value for WP_CONTENT_URL. To do this, we can’t reference function get_option since it hasn’t been defined yet, so we must either hardcode the domain or, possibly better, retrieve it from $_SERVER like this:

$s = empty($_SERVER["HTTPS"]) ? '' : (($_SERVER["HTTPS"] == "on") ? "s" : "");
$sp = strtolower($_SERVER["SERVER_PROTOCOL"]);
$protocol = substr($sp, 0, strpos($sp, "/")) . $s;
$port = ($_SERVER["SERVER_PORT"] == "80") ? "" : (":".$_SERVER["SERVER_PORT"]);
define('WP_CONTENT_URL', $protocol."://".$_SERVER['SERVER_NAME'].$port.'/content');

We can now access the site on the browser under domain.com/wp/, and proceed to install WordPress. Once the installation is complete, we log into the Dashboard and activate the theme and plugins.

Finally, because WordPress was installed under subdirectory wp, the URL will contain path “/wp” when accessing the site. Let’s remove that (though not for the admin side: being accessed under /wp/wp-admin/ adds an extra level of security to the site).

The documentation proposes two methods to do this: with or without a URL change. I tried both, and found the one without the URL change a bit unsatisfying, because it requires specifying the domain in the .htaccess file, thus mixing application code and configuration information. Hence, I’ll describe the method with the URL change.

First, head to “General Settings” which you’ll find under domain.com/wp/wp-admin/options-general.php and remove the “/wp” bit from the “Site Address (URL)” value and save. After doing so, the site will be momentarily broken: browsing the homepage will list the contents of the directory, and browsing a blog post will return a 404. However, don’t panic, this will be fixed in the next step.

Next, we copy the index.php file to the root folder, and edit this new file, adding “wp/” to the path of the required file, like this:

/** Loads the WordPress Environment and Template */
require( dirname( __FILE__ ) . '/wp/wp-blog-header.php' );

We are done! We can now access our site in the browser under domain.com:


WordPress site
WordPress site successfully installed through Composer (Large preview)

Even though it has downloaded the whole WordPress core codebase and several libraries, our project itself involves only six files, of which only five need to be committed to Git:

  1. .gitignore
  2. composer.json
  3. composer.lock
    This file is generated automatically by Composer, containing the versions of all installed dependencies.
  4. index.php
    This file is created manually.
  5. .htaccess
    This file is automatically created by WordPress, so we could avoid committing it, however, we may soon customize it for the application, in which case it requires committing.

The remaining sixth file is wp-config.php which must not be committed since it contains environment information.

Not bad!

The process went pretty smoothly; however, it could be improved if the following issues were dealt with better:

  1. Some application code is not committed under version control.
    Since it contains environment information, the wp-config.php file must not be committed to Git; instead, we must maintain a different version of this file for each environment. However, we also added a line of code to load Composer’s autoloader in this file, which will need to be replicated in every version of this file across all environments.
  2. The installation process is not fully automated.
    After installing the dependencies through Composer, we must still install WordPress through its standard procedure, log in to the Dashboard, and change the site URL so that it doesn’t contain “wp/”. Hence, the installation process is slightly fragmented, involving both a script and a human operator.
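The first issue can be mitigated (this is my own suggestion, not part of the recipe): keep the application code in a small committed file, and have each environment’s wp-config.php require it, so that only environment data differs between copies. A sketch, with a hypothetical file name:

```php
<?php
// config-application.php — a hypothetical, committed file at the project root.
// It holds the application code that would otherwise live in wp-config.php,
// so each environment's wp-config.php contains only environment information
// and simply does: require_once __DIR__ . '/config-application.php';

// Load Composer's autoloader
require_once __DIR__ . '/content/vendor/autoload.php';

// Move the location of the content dir
define('WP_CONTENT_DIR', __DIR__ . '/content');
```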

Let’s see next how Bedrock fares for the same task.

Managing The Whole WordPress Site Through Bedrock

Bedrock is a WordPress boilerplate with an improved folder structure, which looks like this:

├── composer.json
├── config
│   ├── application.php
│   └── environments
│       ├── development.php
│       ├── staging.php
│       └── production.php
├── vendor
└── web
    ├── app
    │   ├── mu-plugins
    │   ├── plugins
    │   ├── themes
    │   └── uploads
    ├── wp-config.php
    ├── index.php
    └── wp

The people behind Roots chose this folder structure in order to make WordPress embrace the Twelve-Factor App, and they elaborate on how this is accomplished through a series of blog posts. This folder structure can be considered an improvement over the standard WordPress one on the following accounts:

  • It adds support for Composer by moving WordPress’ core out of the root folder and into folder web/wp;
  • It enhances security, because the configuration files containing the database information are not stored within folder web, which is set as the web server’s document root (the threat being that, if the web server’s protection fails, there is nothing blocking access to the configuration files);
  • The folder wp-content has been renamed to “app”, a more standard name, since it is used by other frameworks such as Symfony and Rails, and one that better reflects the contents of this folder.

Bedrock also introduces different config files for different environments (development, staging, production), and it cleanly decouples the configuration information from code through library PHP dotenv, which loads environment variables from a .env file which looks like this:

DB_NAME=database_name
DB_USER=database_user
DB_PASSWORD=database_password

# Optionally, you can use a data source name (DSN)
# When using a DSN, you can remove the DB_NAME, DB_USER, DB_PASSWORD, and DB_HOST variables
# DATABASE_URL=mysql://database_user:database_password@database_host:database_port/database_name

# Optional variables
# DB_HOST=localhost
# DB_PREFIX=wp_

WP_ENV=development
WP_HOME=http://example.com
WP_SITEURL=${WP_HOME}/wp

# Generate your keys here: https://roots.io/salts.html
AUTH_KEY='generateme'
SECURE_AUTH_KEY='generateme'
LOGGED_IN_KEY='generateme'
NONCE_KEY='generateme'
AUTH_SALT='generateme'
SECURE_AUTH_SALT='generateme'
LOGGED_IN_SALT='generateme'
NONCE_SALT='generateme'

Let’s proceed to install Bedrock, following their instructions. First create a project like this:

composer create-project "roots/bedrock"

This command will bootstrap the Bedrock project into a new folder “bedrock”, setting up the folder structure, installing all the initial dependencies, and creating an .env file in the root folder which must contain the site’s configuration. We must then edit the .env file to add the database configuration and the secret keys and salts, as would normally be required in the wp-config.php file, and also to indicate the environment (development, staging, production) and the site’s domain.

Next, we can already add themes and plugins. Bedrock comes with themes twentyten to twentynineteen shipped by default under folder web/wp/wp-content/themes, but when adding more themes through Composer these are installed under web/app/themes. This is not a problem, because WordPress can register more than one directory to store themes through function register_theme_directory.
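Registering an extra theme directory is a one-liner. As a sketch (Bedrock’s bootstrap already takes care of this, so you don’t normally need to add it yourself):

```php
<?php
// Sketch: make WordPress also look for themes in core's default location,
// in addition to the directory where Composer installs them.
// ABSPATH and register_theme_directory() are provided by WordPress at runtime.
register_theme_directory(ABSPATH . 'wp-content/themes');
```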

Bedrock includes the WPackagist information in the composer.json file, so we can already install themes and plugins from this repository. To do so, simply step into the root folder of the project and execute the composer require command for each theme and plugin to install (this command already installs the dependency, so there is no need to execute composer update):

cd bedrock
composer require "wpackagist-theme/zakra"
composer require "wpackagist-plugin/akismet":"^4.1"
composer require "wpackagist-plugin/bbpress":">=2.5.12"

The last step is to configure the web server, setting the document root to the full path of the web folder. After this is done, heading to domain.com in the browser we are happily greeted by the WordPress installation screen. Once the installation is complete, we can access the WordPress admin under domain.com/wp/wp-admin, activate the installed theme and plugins, and the site is accessible under domain.com. Success!
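As an illustration, an nginx server block with the document root pointing at the web folder might look like this (paths, domain, and the PHP-FPM socket are placeholder assumptions, not part of Bedrock’s instructions):

```
# Hypothetical nginx configuration; adjust paths, domain and PHP-FPM socket.
server {
    listen 80;
    server_name example.com;

    # Document root must be Bedrock's "web" folder, not the project root
    root /var/www/bedrock/web;
    index index.php;

    location / {
        try_files $uri $uri/ /index.php?$args;
    }

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass unix:/run/php/php-fpm.sock;
    }
}
```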

Installing Bedrock was pretty smooth. In addition, Bedrock does a better job at not mixing the application code with environment information in the same file, so the issue concerning application code not being committed under version control that we got with the previous method doesn’t happen here.

Conclusion

With the launch of Gutenberg and the upcoming bumping up of PHP’s minimum required version, WordPress has entered an era of modernization which provides a wonderful opportunity to rethink how we build WordPress sites to make the most out of newer tools and technologies. Composer, Packagist, and WPackagist are such tools which can help us produce better WordPress code, with an emphasis on reusable components to produce modular applications which are easy to test and bugfix.

First released in 2012, Composer is not precisely what we would call “new” software; however, it has not been incorporated into WordPress’ core due to a few incompatibilities between WordPress’ architecture and Composer’s requirements. This issue has been an ongoing source of frustration for many members of the WordPress development community, who assert that integrating Composer into WordPress would improve how software for WordPress is created and released. Fortunately, we don’t need to wait until this issue is resolved, since several actors have taken the matter into their own hands to provide a solution.

In this article, we reviewed two projects which provide an integration between WordPress and Composer: manually setting up our composer.json file to depend on John P. Bloch’s mirror of WordPress’ core, and Bedrock by Roots. We saw how these two alternatives, which offer different degrees of freedom for shaping the project’s folder structure and differ in how smooth their installation process is, can both fulfill our requirement of completely managing a WordPress site, including the installation of the core, themes, and plugins.

If you have any experience using WordPress and Composer together, either through the two projects described here or any other one, I would love to hear your opinion in the comments below.

I would like to thank Andrey “Rarst” Savchenko, who reviewed this article and provided invaluable feedback.

Smashing Editorial
(rb, ra, il)

Source: Smashing Magazine, Using Composer With WordPress

Organizing Brainstorming Workshops: A Designer’s Guide

dreamt up by webguru in Uncategorized | Comments Off on Organizing Brainstorming Workshops: A Designer’s Guide


Slava Shestopalov



When you think about the word “brainstorming”, what do you imagine? Maybe a crowd of people who you used to call colleagues, outshouting each other, assaulting the whiteboard, and nearly throwing punches to win control over the projector? Fortunately, brainstorming has a bright side: It’s a civilized process of generating ideas together. At least this is how it appears in the books on creativity. So, can we make it real?

I have already tried the three methodologies presented in this article with friends of mine, so there is no theorizing. After reaching the end of this article, I hope that you’ll be able to organize brainstorming sessions with your colleagues and clients, and co-create something valuable. For instance, ideas about a new mobile application or a design conference agenda.

Universal Principles

Don’t be surprised to notice that all brainstorming techniques have much in common. Although “rituals” vary, the essence is the same. Participants look at the subject from different sides and come up with ideas. They write their thoughts down and then sort or prioritize them. I know, sounds easy as pie, doesn’t it? But here’s the thing. Without the rules of the game, brainstorming won’t work. It all boils down to just three crucial principles:

  1. The more, the better.
    Brainstorming aims at quantity, which later turns into quality. The more ideas a team generates, the wider the choice it gains. It’s normal when two or more participants say the same thing. It’s normal if some ideas are funny. A facilitator’s task is to encourage people to share what is hidden in their minds.
  2. No criticism.
    The goal of brainstorming is to generate a pool of ideas. All ideas are welcome. A boss has no right to silence a subordinate. An analyst shouldn’t make fun of a colleague’s “fantastic” vision. A designer shouldn’t challenge the usability of a teammate’s suggestion.
  3. Follow the steps.
    Only a goal-oriented and time-bound activity is productive, whereas uncontrolled bursts of creativity, as a rule, fail. To make a miracle happen, organize the best conditions for it.

Here are the universal slides you can use as an introduction to any brainstorming technique.


Examples of slides that describe the three core brainstorming rules
(Large preview)

Now that the principles are clear, you are to decide who’s going to participate. The quick answer is diversity. Invite as many different experts as possible, including business owners, analysts, marketers, developers, salespeople, potential or real users. All participants should be related to the subject or be interested in it. Otherwise, they’ll be left fantasizing about a topic they’ve never dealt with and don’t care about.

One more thing before we proceed with the three techniques (Six Thinking Hats, Walt Disney’s Creative Strategy, and SCAMPER). When can a designer or other specialist use brainstorming? Here are two typical cases:

  1. There is a niche for a new product, service or feature but the team doesn’t have a concept of what it might be.
  2. An existing product or service is not as successful as expected. The team generally understands the reasons but has no ideas on how to fix it.

1. Six Thinking Hats

The first technique I’d like to present is known as “Six Thinking Hats”. It was invented in 1985 by Edward de Bono, a Maltese physician, psychologist, and consultant. Here’s a quick overview:

Complexity: Normal
Subject: A process, a service, a product, a feature, anything. For example, one of the topics at our session was the improvement of the designers’ office infrastructure. Another team brainstormed about how to improve the functionality of the Sketch app.
Duration: 1–1.5 hours
Facilitation: One facilitator for a group of 5–8 members. If there are more people, better divide them into smaller groups and involve assistants. We split our design crew of over 20 people into three workgroups, which were working simultaneously on their topics.

A photo of the brainstorming session using the Six Thinking Hats technique
Brainstorming workshop for the ELEKS design team (Large preview)

Materials

  • Slides with step-by-step instructions.
  • A standalone timer or laptop with an online timer in the fullscreen mode.
  • 6 colored paper hats or any recognizable hat symbols for each participant. The colors are blue, yellow, green, white, red, and black. For example, we used crowns instead of hats, and it was fun.
  • Sticky notes of 6 colors: blue, yellow, green, white, red, and brown or any other dark tint for representing black. 1–2 packs of each color per team of 5–8 people would be enough.
  • A whiteboard or a flip-chart or a large sheet of paper on a table or wall.
  • Black marker pens for each participant (markers should be whiteboard-safe if you choose this kind of surface).

Process

Start a brainstorming session with a five-minute intro. What will participants do? Why is it important? What will the outcome be? What’s next? It’s time to explain the steps. In my case, we described the whole process beforehand to ensure people get the concept of “thinking hats.” De Bono’s “hat” represents a certain way of perceiving reality. Different people are used to “wearing” one favorite “hat” most of the time, which limits creativity and breeds stereotypes.

For example, risk analysts are used to finding weaknesses and threats. That’s why such a phenomenon as gut feeling usually doesn’t ring a bell with them.


A set of slide samples for conducting a brainstorming session using the method of Six Thinking Hats
(Large preview)

Trying on “hats” is a metaphor that helps people to start thinking differently with ease. Below is an example of the slides that explain what each “hat” means. Our goal was to make people feel prepared, relaxed, and not afraid of the procedure complexity.


A set of slide samples for conducting a brainstorming session using the method of Six Thinking Hats
(Large preview)

The blue “hat” is an odd one out. It has an auxiliary role and embodies the process of brainstorming itself. It starts the session and finishes it. White, yellow, black, red, and green “hats” represent different ways to interpret reality.

For example, the red one symbolizes intuitive and emotional perception. When the black “hat” is on, participants wake up their inner “project manager” and look at the subject through the concepts of budgets, schedule, cost, and revenue.

There are various schemas of “hats” depending on the goal. We wanted to try all the “hats” and chose a universal, all-purpose order:

Blue: Preparation
White: Collecting available and missing data
Red: Listening to emotions and unproven thoughts
Yellow: Noticing what is good right now
Green: Thinking about improvements and innovations
Black: Analyzing risks and resources
Blue: Summarizing

A set of slide samples for conducting a brainstorming session using the method of Six Thinking Hats
(Large preview)

Now the exercise itself. Each slide is a cheat sheet with a task and prompts. When a new step starts and a proper slide appears on the screen, a facilitator starts the timer. Some steps have an extended duration; other steps require less time. For instance, it’s easy to agree on a topic formulation and draw a canvas but writing down ideas is a more time-consuming activity.

When participants see a “hat” slide (except the blue one), they are to generate ideas, write them on sticky notes and put the notes on the whiteboard, flip-chart or paper sheet. For example, the yellow “hat” is displayed on the screen. People put on yellow paper hats and think about the benefits and nice features the subject has now and why it may be useful or attractive. They concisely write these thoughts on the sticky notes of a corresponding color (for the black “hat” — any dark color can be used so that you don’t need to buy special white markers). All the sticky notes of the same color should be put in the corresponding column of the canvas.


A set of slide samples for conducting a brainstorming session using the method of Six Thinking Hats
(Large preview)

The last step doesn’t follow the original technique. We thought it would be pointless to stick dozens of colored notes and call it a day. We added the Affinity sorting part aimed at summarizing ideas and making the moment of their implementation a bit closer. The teams had to find notes about similar things, group them into clusters and give a name to each cluster.

For example, in the topic “Improvement of the designers’ office infrastructure,” my colleagues created such clusters as “Chair ergonomics,” “Floor and walls,” “Hardware upgrade.”


A set of slide samples for conducting a brainstorming session using the method of Six Thinking Hats
(Large preview)

We finished the session with the mini-presentations of findings. A representative from each team listed the clusters they came up with and shared the most exciting observation or impression.

2. Walt Disney’s Creative Strategy

Walt Disney’s creative method was discovered and modeled by Robert Dilts, a neuro-linguistic programming expert, in 1994. Here’s an overview:

Complexity: Easy
Subject: Anything, especially projects you’ve been postponing for a long time or dreams you cannot start fulfilling for unknown reasons. For example, one of the topics I dealt with was “Improvement of the designer-client communication process.”
Duration: 1 hour
Facilitation: One facilitator for a group of 5–8 members. When we conducted an educational workshop on brainstorming, my co-trainers and I had four teams of six members working simultaneously in the room.

A photo of the brainstorming session using Walt Disney's Creative Strategy
Educational session on brainstorming for the Projector Design School (Large preview)

Materials

  • Slides with step-by-step instructions.
  • A standalone timer or laptop with an online timer in the fullscreen mode.
  • Standard or large yellow sticky notes (1–2 packs per team of 5–8 people).
  • Small red sticky notes (1–2 packs per team).
  • The tiniest sticky stripes or sticky dots for voting (1 pack per team).
  • A whiteboard or a flip-chart or a large sheet of paper on a table or wall.
  • Black marker pens for each participant (markers should be whiteboard-safe if you choose this kind of surface).

Process

This technique is named after the original thinking manner of Walt Disney, a famous animator and film producer. Disney didn’t use any “technique”; his creative process was intuitive yet productive. Robert Dilts discovered this creative know-how much later, based on the memories of Disney’s colleagues. Although Dilts’s original concept is designed for personal use, we managed to turn it into a group format.


Slide examples for conducting a brainstorming workshop according to Walt Disney's Strategy
(Large preview)

Disney’s strategy works owing to the strict separation of three roles — the dreamer, the realist, and the critic. People are used to mixing these roles while thinking about the future, and that’s why they often fail. “Let’s do X. But it’s so expensive. And risky… Maybe later,” this is how an average person dreams. As a result, innovative ideas get buried in doubts and fears.

In this kind of brainstorming, the facilitator’s goal is to prevent participants from mixing the roles and nipping creative ideas in the bud. We helped the team to get into the mood and extract pure roles through open questions on the slides and introductory explanations.


Slide examples for conducting a brainstorming workshop according to Walt Disney's Strategy
(Large preview)

For example, here is my intro to the first role:

“The dreamer is not restrained by limitations or rules of the real world. The dreamer generates as many ideas as possible and doesn’t think about the obstacles on the way of their implementation. S/he imagines the most fun, easy, simple, and pleasant ways of solving a problem. The dreamer is unaware of criticism, planning, and rationalism altogether.”

As a result, participants should have a bunch of encircled ideas.

When participants come up with the cloud of ideas, they proceed to the next step. It’s important to explain to them what the second role means. I started with the following words:

“The realist is the dreamer’s best friend. The realist is the manager who can convert a vague idea into a step-by-step plan and find the necessary resources. The realist has no idea about criticism. He or she tries to find a real-world implementation for the dreamer’s ideas, namely who can make an idea come true, and when and how.”

Brainstormers write down possible solutions on sticky notes and put them on the corresponding idea circles. Of course, some of the ideas can have no solution, whereas others may be achieved in many ways.


Slide examples for conducting a brainstorming workshop according to Walt Disney's Strategy
(Large preview)

The third role is the trickiest one because people tend to think this is the guy who drags dreamer’s and realist’s work through the mud. Fortunately, this is not true.

I started my explanation:

“The critic is the dreamer’s and realist’s best friend. This person analyses risks and cares about the safety of proposed solutions. The critic doesn’t touch bare ideas but works with solutions only. The critic’s goal is to help and foresee potential issues in advance.”

The team defines risks and writes them down on smaller red notes. A solution can have no risks or several risks.

After that’s done, team members start voting for the ideas they consider worth further work. They make a decision based on the value of an idea, the availability of solutions, and the severity of the connected risks. Ideas without solutions cannot be voted for, since they have no connection to reality.

During my workshops, each participant had three voting dots. They could distribute them in different ways, e.g. by sticking the dots to three different ideas or supporting one favorite idea with all of the dots they had.


Slide examples for conducting a brainstorming workshop according to Walt Disney's Strategy
(Large preview)

The final activity is roadmapping. The team takes the ideas that gained the most support (typically 6–10) and arranges them on a timeline depending on the implementation effort. If an idea is easy to put into practice, it goes to the column “Now.” If an idea is complex and requires a lot of preparation or favorable conditions, it’s placed farther along the timeline.

Of course, there should be time for sharing the main findings. Teams present their timelines with shortlisted ideas and tell about the tendencies they have observed during the exercise.

3. SCAMPER

This technique grew out of a checklist proposed in 1953 by Alex Osborn, best known for co-founding and leading BBDO, a worldwide advertising agency network; the SCAMPER mnemonic itself was later arranged by Bob Eberle. A quick overview:

Complexity: Normal to difficult
Subject: Ideally, technical or tangible things, although the author and evangelists of this method say it’s applicable to anything. In my experience, SCAMPER works less effectively with abstract objects. For example, the team barely coped with the topic “Improve communication between a designer and client,” but it worked great for “Invent the best application for digital prototyping.”
Duration: Up to 2 hours
Facilitation: One facilitator for a group of 5–8 members

A photo of the brainstorming session using the SCAMPER technique
Brainstorming workshop for the ELEKS design team (Large preview)

Materials

  • Slides with step-by-step instructions.
  • A standalone timer or laptop with an online timer in the fullscreen mode.
  • Standard yellow sticky notes (7 packs per team of 5–8 people).
  • A whiteboard or a flip-chart or a large sheet of paper on a table or wall.
  • Black marker pens for each participant (markers should be whiteboard-safe if you choose this kind of surface).
  • Optionally: Thinkpak cards by Michael Michalko (1 pack per team).

Process

This brainstorming method employs various ways to modify an object. It’s aimed at activating inventive thinking and helps to optimize an existing product or create a brand new thing.


Samples of slides for conducting a brainstorming workshop employing SCAMPER method
(Large preview)

Each letter in the acronym represents a certain transformation you can apply to the subject of brainstorming.

S: Substitute
C: Combine
A: Adapt
M: Modify
P: Put to other uses
E: Eliminate
R: Rearrange/Reverse

It’s necessary to illustrate each step with an example and ask participants to generate a couple of ideas themselves for the sake of training. As a result, you’ll be sure they won’t get stuck.

We explained the mechanism by giving sample ideas for improving such an ordinary object as a ballpoint pen.

  • Substitute the ink with something edible.
  • Combine the body and the grip so that they are one piece.
  • Adapt a knife for “writing” on wood like a pen.
  • Modify the body so that it becomes flexible — for wearing as a bracelet.
  • Use a pen as a hairpin or arrow for darts.
  • Eliminate the clip and use a magnet instead.
  • Reverse the clip. As a result, the nib will be oriented up, and the pen won’t spill in a pocket.

Samples of slides for conducting a brainstorming workshop employing SCAMPER method
(Large preview)

Once the audience has no questions left, you can start. First of all, team members agree on the subject formulation. Then they draw a canvas on a whiteboard or large paper sheet.

Once a team sees one of the SCAMPER letters on the screen, they start generating ideas using the corresponding method: substitute, combine, adapt, modify, and so on. They write the ideas down and stick the notes into corresponding canvas columns.

The questions on the slides remind what each step means and help to get in a creative mood. Time limitation helps to concentrate and not to dive into discussions.


Samples of slides for conducting a brainstorming workshop employing SCAMPER method
(Large preview)

Affinity sorting — the last step — is our designers’ contribution to the original technique. It pushes the team to start implementation. Otherwise, people quickly forget all valuable findings and return to the usual state of things. Just imagine how discouraging it will be if the results of a two-hour ideation session are put on the back burner.


Samples of slides for conducting a brainstorming workshop employing SCAMPER method
(Large preview)

Thinkpak Cards

It’s a set of brainstorming cards created by Michael Michalko. Thinkpak makes a session more exciting through gamification. Each card represents a certain letter from SCAMPER. Participants shuffle the pack, take cards in turn and come up with corresponding ideas about an object. It’s fun to compete in the number of ideas each participant generates for a given card within a limited time, for instance, three or five minutes.

My friends and I have tried brainstorming both with and without a Thinkpak; it works both ways. Cards are great for training inventive thinking. If your team has never participated in brainstorming sessions, it’ll be great to play the cards first and then switch to a business subject.

Lessons Learned

  1. Dry run.
    People often become disappointed in brainstorming if the first session they participate in fails. Some people I worked with have a prejudice towards creativity and consider it a waste of time or something not proven scientifically. Fortunately, we tried all the techniques internally — in the design team. As a result, all the actual brainstorming sessions went well. Moreover, our confidence helped others to believe in the power of brainstorming exercises.
  2. Relevant topic and audience.
    Brainstorming can fail if you invite people who don’t have a relevant background or the power and willingness to change anything. Once I asked a team of design juniors to ideate about improving the process of selling design services to clients. They lacked the experience and couldn’t generate many ideas. Fortunately, it was a training session, and we easily changed the topic.
  3. Documenting outcomes.
    So, the session is over. Participants go home or return to their workplaces. Almost surely, the next morning they will hardly recall a single thing. I recommend creating a wrap-up document with photos and digitized canvases. The quicker you write and share it, the higher the chances that the ideas will actually be implemented.

Smashing Editorial
(dm, ra, il)

Source: Smashing Magazine, Organizing Brainstorming Workshops: A Designer’s Guide