
I Used The Web For A Day Using A Screen Reader

Chris Ashton



This article is part of a series in which I attempt to use the web under various constraints, representing a given demographic of user. I hope to raise the profile of difficulties faced by real people, which are avoidable if we design and develop in a way that is sympathetic to their needs. Last time, I navigated the web for a day with just my keyboard. This time around, I’m avoiding the screen and am using the web with a screen reader.

What Is A Screen Reader?

A screen reader is a software application that interprets things on the screen (text, images, links, and so on) and converts these to a format that visually impaired people are able to consume and interact with. Two-thirds of screen reader users choose speech as their screen reader output, and one-third of screen reader users choose braille.

Screen readers can be used with programs such as word processors, email clients, and web browsers. They work by mapping the contents and interface of the application to an accessibility tree that can then be read by the screen reader. Some screen readers have to manually map specific programs to the tree, whereas others are more generic and should work with most programs.



Pie chart from the Screen Reader Survey 2017, showing that JAWS, NVDA and VoiceOver are the most used screen readers on desktop.

On Windows, the most popular screen reader is JAWS, with almost half of the overall screen reader market. It is commercial software, costing around a thousand dollars for the home edition. An open-source alternative for Windows is NVDA, which is used by just under a third of all screen reader users on desktop.

There are other alternatives, including Microsoft Narrator, System Access, Window-Eyes and ZoomText (not a full screen reader, but a screen magnifier with reading abilities); combined, these account for about 6% of screen reader usage. On Linux, Orca is bundled by default on a number of distributions.

The screen reader bundled into macOS, iOS and tvOS is VoiceOver. VoiceOver makes up 11.7% of desktop screen reader users and rises to 69% of screen reader users on mobile. The other major screen readers in the mobile space are Talkback on Android (29.5%) and Voice Assistant on Samsung (5.2%), which is itself based on Talkback, but with additional gestures.


Popularity of mobile screen readers: VoiceOver first, Talkback second, Voice Assistant third.

I have a MacBook and an iPhone, so will be using VoiceOver and Safari for this article. Safari is the recommended browser to use with VoiceOver, since both are maintained by Apple and should work well together. Using VoiceOver with a different browser can lead to unexpected behaviors.

How To Enable And Use Your Screen Reader

My instructions are for VoiceOver, but there should be equivalent commands for your screen reader of choice.

VoiceOver On Desktop

If you’ve never used a screen reader before, it can be a daunting experience. It’s a major culture shock going to an auditory-only experience, and not knowing how to control the onslaught of noise is unnerving. For this reason, the first thing you’ll want to learn is how to turn it off.

The shortcut for turning VoiceOver off is the same as the shortcut for turning it on: ⌘ + F5 (⌘ is also known as the Cmd key). On newer Macs with a touch bar, the shortcut is to hold the command key and triple-press the Touch ID button. Is VoiceOver speaking too fast? Open VoiceOver Utility, hit the ‘Speech’ tab, and adjust the rate accordingly.

Once you’ve mastered turning it on and off, you’ll need to learn to use the “VoiceOver key” (which is actually two keys pressed at the same time): Ctrl and ⌥ (the latter key is also known as “Option” or the Alt key). Using the VO key in combination with other keys, you can navigate the web.

For example, you can use VO + A to read out the web page from the current position; in practice, this means holding Ctrl + ⌥ + A. Remembering what VO corresponds to is confusing at first, but the VO notation is for brevity and consistency. It is possible to configure the VO key to be something else, so it makes sense to have a standard notation that everyone can follow.

You may use VO and arrow keys (VO + → and VO + ←) to go through each element in the DOM in sequence. When you come across a link, you can use VO + Space to click it — you’ll use these keys to interact with form elements too.

Huzzah! You now know enough about VoiceOver to navigate the web.

VoiceOver On Mobile

The mobile/tablet shortcut for turning on VoiceOver varies according to the device, but is generally a ‘triple click’ of the home button (after enabling the shortcut in settings).

You can read everything from the current position with a Two-Finger Swipe Down command, and you can select each element in the DOM in sequence with a Swipe Right or Left.

You now know as much about iOS VoiceOver as you do desktop!

Think about how you use the web as a sighted user. Do you read every word carefully, in sequence, from top to bottom? No. Humans are lazy by design and have learned to ‘scan’ pages for interesting information as fast as possible.

Screen reader users have this same need for efficiency, so most will navigate the page by content type, e.g. headings, links, or form controls. One way to do this is to open the shortcuts menu with VO + U, navigate to the content type you want with the ← and → arrow keys, then navigate through those elements with the ↑↓ keys.


Screenshot of the ‘Practice Webpage Navigation’ VoiceOver tutorial screen.

Another way to do this is to enable ‘Quick Nav’ (by holding ← along with → at the same time). With Quick Nav enabled, you can select the content type by holding the ↑ arrow alongside ← or →. On iOS, you do this with a Two-Finger Rotate gesture.


Setting the rotor item type using keyboard shortcuts; the rotor is currently on ‘Headings’.

Once you’ve selected your content type, you can skip through each rotor item with the ↑↓ keys (or Swipe Up or Down on iOS). If that feels like a lot to remember, it’s worth bookmarking this super handy VoiceOver cheatsheet for reference.

A third way of navigating via content types is to use trackpad gestures. This brings the experience closer to how you might use VoiceOver on iOS on an iPad/iPhone, which means having to remember only one set of screen reader commands!


Screenshot of the ‘Practice Trackpad Gestures’ VoiceOver tutorial screen.

You can practice the gesture-based navigation and many other VoiceOver techniques in the built-in training program on OSX. You can access it through System Preferences → Accessibility → VoiceOver → Open VoiceOver Training.

After completing the tutorial, I was raring to go!

Case Study 1: YouTube

Searching On YouTube

I navigated to the YouTube homepage in the Safari toolbar, upon which VoiceOver told me to “step in” to the web content with Ctrl + ⌥ + Shift + ↓. I’d soon get used to stepping into web content, as the same mechanism applies for embedded content and some form controls.

Using Quick Nav, I was able to navigate via form controls to easily skip to the search section at the top of the page.


When focused on the search field, VoiceOver announced: ‘Search, search text field Search’.

I searched for some quality content:


Screenshot of ‘impractical jokers’ in the input field. Who doesn’t love Impractical Jokers?

And I navigated to the search button:


VoiceOver announces “Press Search, button.”

However, when I activated the button with VO + Space, nothing was announced.

I opened my eyes and the search had happened and the page had populated with results, but I had no way of knowing through audio alone.

Puzzled, I reproduced my actions with devtools open, and kept an eye on the network tab.

As suspected, YouTube is making use of a performance technique called “client-side rendering” which means that JavaScript intercepts the form submission and renders the search results in-place, to avoid having to repaint the entire page. Had the search results loaded in a new page like a normal link, VoiceOver would have announced the new page for me to navigate.

There are entire articles dedicated to accessibility for client-rendered applications; in this case, I would recommend YouTube implements an aria-live region which would announce when the search submission is successful.

Tip #1: Use aria-live regions to announce client-side changes to the DOM.

<form id="search-form">
  <label>
    <span class="off-screen">Search for a video</span>
    <input type="text" />
  </label>
  <input type="submit" value="Search" />
</form>

<div role="region" aria-live="polite" class="off-screen" id="search-status"></div>

document.getElementById('search-form').addEventListener('submit', function (e) {
  e.preventDefault();
  ajaxSearchResults(); // not defined here, for brevity
  document.getElementById('search-status').textContent = 'Search submitted. Navigate to results below.'; // announce to screen reader
});

Now that I’d cheated and knew there were search results to look at, I closed my eyes and navigated to the first video of the results, by switching to Quick Nav’s “headings” mode and then stepping through the results from there.

Playing Video On YouTube

As soon as you load a YouTube video page, the video autoplays. This is something I value in everyday usage, but this was a painful experience when mixed with VoiceOver talking over it. I couldn’t find a way of disabling the autoplay for subsequent videos. All I could really do was load my next video and quickly hit CTRL to stop the screen reader announcements.

Tip #2: Always provide a way to suppress autoplay, and remember the user’s choice.
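As a minimal sketch (the element IDs, storage key and video file are all hypothetical), a player could check a remembered preference before auto-starting, and persist the choice for subsequent videos:

<video id="player" src="video.mp4" controls></video>
<label><input type="checkbox" id="autoplay-toggle" /> Disable autoplay</label>

const video = document.getElementById('player');

// Only autoplay if the user has not previously opted out
if (localStorage.getItem('autoplay-off') !== 'true') {
  // play() returns a promise; browsers may block unmuted autoplay anyway
  video.play().catch(function () { /* leave the video paused */ });
}

// Remember the user's choice for future videos
document.getElementById('autoplay-toggle').addEventListener('change', function (e) {
  localStorage.setItem('autoplay-off', e.target.checked ? 'true' : 'false');
});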

The video itself is treated as a “group” you have to step into to interact with. I could navigate each of the options in the video player, which I was pleasantly surprised by — I doubt that was the case back in the days of Flash!

However, I found that some of the controls in the player had no label, so ‘Cinema mode’ was simply read out as “button”.


Focussing on the ‘Cinema Mode’ button, there was no label indicating its purpose.

Tip #3: Always label your form controls.

Whilst screen reader users are predominantly blind, about 20% are classed as “low vision”, so can see some of the page. Therefore, a screen reader user may still appreciate being able to activate “Cinema mode”.

These tips aren’t listed in order of importance, but if they were, this would be my number one:

Tip #4: Screen reader users should have functional parity with sighted users.

By neglecting to label the “cinema mode” option, we’re excluding screen reader users from a feature they might otherwise use.
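A minimal sketch of how an icon-only control like this could be labelled; aria-label is one option, and visually hidden text inside the button works too:

<button aria-label="Cinema mode">
  <svg width="16" height="16" aria-hidden="true" focusable="false">
    <rect x="1" y="4" width="14" height="8" fill="currentColor" />
  </svg>
</button>

VoiceOver would then announce “Cinema mode, button” rather than just “button”.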

That said, there are cases where a feature won’t be applicable to a screen reader — for example, a detailed SVG line chart which would read as a gobbledygook of contextless numbers. In cases such as these, we can apply the special aria-hidden="true" attribute to the element so that it is ignored by screen readers altogether. Note that we would still need to provide some off-screen alternative text or data table as a fallback.

Tip #5: Use aria-hidden to hide content that is not applicable to screen reader users.
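For example, a decorative chart could be hidden from screen readers while an off-screen summary carries the same information (a sketch, reusing the .off-screen utility class shown later in this article; the data described is invented for the example):

<svg aria-hidden="true" width="300" height="100">
  <!-- detailed line chart; meaningless when read aloud -->
  <polyline points="0,80 50,60 100,65 150,30 200,45 250,20 300,25"
            fill="none" stroke="currentColor" />
</svg>
<p class="off-screen">Sales rose steadily from January to June, peaking in May.</p>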

It took me a long time to figure out how to adjust the playback position so that I could rewind some content. Once you’ve “stepped in” to the slider (VO + Shift + ↓), you hold ⌥ + ↑↓ to adjust. It seems unintuitive to me, but then again it’s not the first time Apple have made some controversial keyboard shortcut decisions.

Autoplay At End Of YouTube Video

At the end of the video I was automatically redirected to a new video, which was confusing — no announcement happened.


There’s a visual cue at the end of the video that the next video will begin shortly. A cancel button is provided, but users may not trigger it in time (if they know it exists at all!).

I soon learned to navigate to the Autoplay controls and disable them:


In-video autoplay disable.

This doesn’t prevent a video from autoplaying when I load a video page, but it does prevent that video page from auto-redirecting to the next video.

Case Study 2: BBC

As news is something consumed passively rather than by searching for something specific, I decided to navigate BBC News by headings. It’s worth noting that you don’t need to use Quick Nav for this: VoiceOver provides element search commands that can save time for the power user. In this case, I could navigate headings with the VO + ⌘ + H keys.

The first heading was the cookie notice, and the second heading was an <h2> entitled ‘Accessibility links’. Under that second heading, the first link was a “Skip to content” link which enabled me to skip past all of the other navigation.


The “Skip to content” link is accessible via keyboard tab and/or screen reader navigation.

‘Skip to content’ links are very useful, and not just for screen reader users; see my previous article “I used the web for a day with just a keyboard”.

Tip #6: Provide ‘skip to content’ links for your keyboard and screen reader users.
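A minimal sketch of one common implementation (the IDs and class names are illustrative): the link is the first focusable element in the page, visually hidden until it receives focus.

<style>
  .skip-link {
    position: absolute;
    top: -40px; /* off-screen until focused */
    left: 0;
    padding: 8px;
    background: #fff;
  }
  .skip-link:focus {
    top: 0; /* revealed when reached via the keyboard */
  }
</style>

<a class="skip-link" href="#main-content">Skip to content</a>
<nav><!-- lengthy site navigation --></nav>
<main id="main-content" tabindex="-1">
  <h1>Top story headline</h1>
</main>

The tabindex="-1" ensures browsers move focus to the target when the link is activated.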

Navigating by headings was a good approach: each news item has its own heading, so I could hear the headline before deciding whether to read more about a given story. And as the heading itself was wrapped inside an anchor tag, I didn’t even have to switch navigation modes when I wanted to click; I could just VO + Space to load my current article choice.


Headings are also links on the BBC.

Whereas the homepage skip-to-content shortcut linked nicely to a #skip-to-content-link-target anchor (which then read out the top news story headline), the article page skip link was broken. It linked to a different ID (#page) which took me to the group surrounding the article content, rather than reading out the headline.


“Press visited, link: Skip to content, group” — not the most useful skip link result.

At this point, I hit VO + A to have VoiceOver read out the entire article to me.

It coped pretty well until it hit the Twitter embed, where it started to get quite verbose. At one point, it unhelpfully read out “Link: 1068987739478130688”.


VoiceOver can read out long links with no context.

This appears to be down to some slightly dodgy markup in the video embed portion of the tweet:


We have an anchor tag, then a nested div, then an img with an alt attribute with the value: “Embedded video”.

It appears that VoiceOver doesn’t read out the alt attribute of the nested image, and there is no other text inside the anchor, so VoiceOver does the most useful thing it knows how: to read out a portion of the URL itself.

Other screen readers may work fine with this markup — your mileage may vary. But a safer implementation would be the anchor tag having an aria-label, or some off-screen visually hidden text, to carry the alternative text. Whilst we’re here, I’d probably change “Embedded video” to something a bit more helpful, e.g. “Embedded video: click to play”.
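A sketch of the safer markup (the href and filename are placeholders):

<a href="/status/123" aria-label="Embedded video: click to play">
  <div>
    <img src="video-thumbnail.jpg" alt="" />
  </div>
</a>

With the aria-label carrying the alternative text, the nested image can safely have an empty alt.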

The link troubles weren’t over there:


One link simply reads out “Link: 1,887”.

Under the main tweet content, there is a ‘like’ button which doubles up as a ‘likes’ counter. Visually it makes sense, but from a screen reader perspective, there’s no context here. This screen reader experience is bad for two reasons:

  1. I don’t know what the “1,887” means.
  2. I don’t know that by clicking the link, I’ll be liking the tweet.

Screen reader users should be given more context, e.g. “1,887 users liked this tweet. Click to like.” This could be achieved with some considerate off-screen text:

<style>
  .off-screen {
    clip: rect(0 0 0 0);
    clip-path: inset(100%);
    height: 1px;
    overflow: hidden;
    position: absolute;
    white-space: nowrap;
    width: 1px;
  }
</style>

<a href="/tweets/123/like">
  <span class="off-screen">1,887 users like this tweet. Click to like</span>
  <span aria-hidden="true">1,887</span>
</a>

Tip #7: Ensure that every link makes sense when read in isolation.

I read a few more articles on the BBC, including a feature ‘long form’ piece.

Reading The Longer Articles

Look at the following screenshot from another BBC long-form article — how many different images can you see, and what should their alt attributes be?


Screenshot of BBC article containing logo, background image, and foreground image (with caption).

Firstly, let’s look at the foreground image of Lake Havasu in the center of the picture. It has a caption below it: “Lake Havasu was created after the completion of the Parker Dam in 1938, which held back the Colorado River”.

It’s best practice to provide an alt attribute even if a caption is provided. The alt text should describe the image, whereas the caption should provide the context. In this case, the alt attribute might be something like “Aerial view of Lake Havasu on a sunny day.”

Note that we shouldn’t prefix our alt text with “Image: ”, or “Picture of” or anything like that. Screen readers already provide that context by announcing the word “image” before our alt text. Also, keep alt text short (under 16 words). If a longer alt text is needed, e.g. an image has a lot of text on it that needs copying, look into the longdesc attribute.

Tip #8: Write descriptive but efficient alt texts.

Semantically, the screenshot example should be marked up with <figure> and <figcaption> elements:

<figure>
  <img src="https://www.smashingmagazine.com/havasu.jpg" alt="Aerial view of Lake Havasu on a sunny day" />
  <figcaption>Lake Havasu was created after the completion of the Parker Dam in 1938, which held back the Colorado River</figcaption>
</figure>

Now let’s look at the background image in that screenshot (the one conveying various drinking glasses and equipment). As a general rule, background or presentational images such as these should have an empty alt attribute (alt=""), so that VoiceOver is explicitly told there is no alternative text and it doesn’t attempt to read it.

Note that an empty alt="" is NOT the same as having no alt attribute, which is a big no-no. If an alt attribute is missing, screen readers will read out the image filenames instead, which are often not very useful!


My screen reader read out ‘pushbutton-mr_sjdxzwy.jpg, image’ because no alt attribute was provided.

Tip #9: Don’t be afraid to use empty alt attributes for presentational content.
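For instance, the decorative background image from the screenshot could be marked up like this (the filename is hypothetical):

<img src="drinks-glasses-background.jpg" alt="" />

The empty alt tells the screen reader to skip the image entirely, rather than falling back to the filename.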

Case Study 3: Facebook

Heading over to Facebook now, and I was having withdrawal symptoms from earlier, so went searching for some more Impractical Jokers.

Facebook takes things a step or two further than the other sites I’ve tried so far, and instead of a ‘Skip to content’ link, we have no less than two dropdowns that link to pages or sections of pages respectively.


Facebook offers plenty of skip link keyboard shortcuts.

Facebook also defines a number of keys as shortcut keys that can be used from anywhere in the page:


Keyboard shortcuts for scrolling between news feed items, making new posts, etc.

I had a play with these, and they work quite well with VoiceOver — once you know they’re there. The only problem I see is that they’re proprietary (I can’t expect these same shortcuts to work outside of Facebook), but it’s nice that Facebook is really trying hard here.

Whilst my first impression of Facebook accessibility was a good one, I soon spotted little oddities that made the site harder to navigate.

For example, I got very confused when trying to navigate this page via headings:


The “Pages Liked by This Page” heading (at the bottom right of the page) is in focus, and is a “heading level 3”.

The very first heading in the page is a heading level 3, tucked away in the sidebar. This is immediately followed by heading level SIX in the main content column, which corresponds to a status that was shared by the Page.


‘Heading level 6’ on a status shared to the Page.

This can be visualized with the Web Developer plugin for Chrome/Firefox.


The h1 goes straight to multiple h6s, skipping h2, h3, h4 and h5.

As a general rule, it’s a good idea to have sequential headings with a difference no higher than 1. It’s not a deal-breaker if you don’t, but it’s certainly confusing coming to it from a screen reader perspective and worrying that you’ve accidentally skipped some important information because you jumped from an h1 to an h6.

Tip #10: Validate your heading structure.
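As a sketch, a sane structure for a page like this one increases each heading level by at most one (the headings themselves are illustrative):

<h1>Page name</h1>
  <h2>Posts</h2>
    <h3>A status shared by the Page</h3>
  <h2>Pages Liked by This Page</h2>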

Now, onto the meat of the website: the posts. Facebook is all about staying in touch with people and seeing what they’re up to. But we live in a world where alt text is an unknown concept to most users, so how does Facebook translate those smug selfies and dog pictures to a screen reader audience?

Facebook has an Automatic Alt Text generator which uses object recognition technology to analyze what (or who) is in a photo and generate a textual description of it. So, how well does it work?


Cambridge Cathedral. How do you think this image fared with the Automatic Alt Text Generator?

The alt text for this image was “Image may contain: sky, grass and outdoor.” It’s a long way off recognizing “Cambridge Cathedral at dusk”, but it’s definitely a step in the right direction.

I was incredibly impressed with the accuracy of some descriptions. Another image I tried came out as “Image may contain: 3 people, including John Smith, Jane Doe and Chris Ashton, people smiling, closeup and indoor” — very descriptive, and absolutely right!

But it does bother me that memes and jokes that go viral on social media are inherently inaccessible; Facebook treats the following as “Image may contain: bird and text”, which whilst true is a long way off the true depiction!


“Scientifically, a raven has 17 primary wing feathers, the big ones at the end of the wing. They are called pinion feathers. A crow has 16. So, the difference between a crow and a raven is only a matter of a pinion.”
Sadly, Facebook’s alt text does not stretch to images with text on them.

Luckily, users can write their own alt text if they wish.

Case Study 4: Amazon

Something I noticed on Facebook happens on Amazon, too. The search button appears before the search input field in the DOM, despite the fact that the button appears after the input field visually.


The ‘nav-fill’ text input appears lower in the DOM than the search button.

Your website is likely to be in a logical order visually. What if somebody randomly moved parts of your webpage around — would it continue to make sense?

Probably not. That’s what can happen to your screen reader experience if you aren’t disciplined about keeping your DOM structure in sync with your visual design. Sometimes it’s easier to move content with CSS, but it’s usually better to move it in the DOM.

Tip #11: Make the DOM order match the visual order.

Why these two high-profile sites choose not to adopt this best-practice guideline for their search navigation baffles me. However, the button and input text are not so far apart that their ordering causes a big accessibility issue.
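A sketch of the principle (class names are illustrative): keep the markup in visual order, and resist CSS properties that reverse it.

<form class="search" role="search">
  <!-- DOM order matches visual order: input first, then button -->
  <input type="text" aria-label="Search" />
  <button type="submit">Search</button>
</form>

.search {
  display: flex;
  /* avoid flex-direction: row-reverse or the order property here;
     they change the visual order without changing the DOM order
     that screen readers follow */
}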

Headings On Amazon

Again, like Facebook, Amazon has a strange headings order. I searched via headings and was most confused that the first heading in the page was a heading level 5 in the “Other Sellers on Amazon” section:


‘First heading, heading level 5, Other Sellers on Amazon’.

I thought this must be a bug with the screen reader, so I dug into Amazon’s source code to check:


The h5 ‘Other Sellers on Amazon’ appears on line 7730 of the page source. It is the first heading in the page.

The h1 of the page appears almost 10,000 lines down in the source code.


The ‘Red Dead Redemption 2 PS4’ h1 appears on line 9054.

Not only is this poor semantically and poor for accessibility, but this is also poor for SEO. Poor SEO means fewer conversions (sales) — something I’d expect Amazon to be very on top of!

Tip #12: Accessibility and SEO are two sides of the same coin.

A lot of what we do to improve the screen reader experience will also improve the SEO. Semantically valid headings and detailed alt text are great for search engine crawlers, which should mean your site ranks more highly in search, which should mean you’ll bring in a wider audience.

If you’re ever struggling to convince your business manager that creating accessible sites is important, try a different angle and point out the SEO benefits instead.

Miscellaneous

It’s hard to condense a day’s worth of browsing and experiences into a single article. Here are some highlights and lowlights that made the cut.

You’ll Notice The Slow Sites

Screen readers cannot parse the page and create their accessibility tree until the DOM has loaded. Sighted users can scan a page while it’s loading, quickly determining if it’s worth their while and hitting the back button if not. Screen reader users have no choice but to wait for 100% of the page to load.


87 percent loaded. I can’t navigate until it’s finished.

It’s interesting to note that whilst making a performant website benefits all, it’s especially beneficial for screen reader users.

Do I Agree To What?

Form controls like this one from NatWest can be highly dependent on spatial closeness to denote relationships. In screen reader land, there is no spatial closeness — only siblings and parents — and guesswork is required to know what you’re ticking ‘yes’ to.


Navigating by form controls, I skipped over the ‘Important’ notice and went straight to the ‘Tick to confirm’ checkbox.

I would have known what I was agreeing to if the disclaimer had been part of the label:

<label>
  Important: We can only hold details of one trip at a time.
  <input type="checkbox" /> Tick to confirm you have read this. *
</label>

Following Code Is A Nightmare

I tried reading a technical article on CSS Tricks using my screen reader, but honestly, found the experience totally impossible to follow. This isn’t the fault of the CSS Tricks website — I think it’s incredibly complex to explain technical ideas and code samples in a fully auditory way. How many times have you tried debugging with a partner and rather than explaining the exact syntax you need, you give them something to copy and paste or you fill it in yourself?

Look how easily you can read this code sample from the article:


// first we get the viewport height and we multiple it by 1% to get a value for a vh unit
let vh = window.innerHeight * 0.01;
// then we set the value in the --vh custom property to the root of the document
document.documentElement.style.setProperty('--vh', `${vh}px`);

But here is the screen reader version:

slash slash first we get the viewport height and we multiple it by one [pause] percent to get a value for a vh unit let vh equals window inner height star [pause] zero zero one slash slash then we set the value in the [pause] vh custom property to the root of the document document document element style set property [pause] vh dollar left brace vh right brace px

It’s totally unreadable in the soundscape. We tend not to have punctuation in comments, and in this case, one line flows seamlessly into the next in screen reader land. camelCase text is read out as separate words as if they’d been written in a sentence. Periods such as window.innerHeight are ignored and treated as “window inner height”. The only ‘code’ read out is the curly brackets at the end.

The code is marked up using standard <pre> and <code> HTML elements, so I don’t know how this could be made better for screen reader users. Any who do persevere with coding have my total admiration.

Otherwise, the only fault I could find was that the logo of the site had a link to the homepage, but no alt text, so all I heard was “link: slash”. It’s only in my capacity as a web developer that I know if you have a link with an attribute href="/" then it takes you to the website homepage, so I figured out what the link was for — but “link: CSS Tricks homepage” would have been better!


Screenshot showing the markup of the CSS Tricks logo link.
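A sketch of one possible fix, assuming the logo is an inline SVG inside the link:

<a href="/" aria-label="CSS-Tricks homepage">
  <svg width="120" height="40" aria-hidden="true" focusable="false">
    <!-- logo artwork -->
    <text x="0" y="25">CSS-Tricks</text>
  </svg>
</a>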

VoiceOver On iOS Is Trickier Than OSX

Using VoiceOver on my phone was an experience!

I gave myself the challenge of navigating the Twitter app and writing a Tweet, with the screen off and using the mobile keyboard. It was harder than expected and I made a number of spelling mistakes.

If I were a regular screen reader user, I think I’d have to join the 41% of mobile screen reader users who use an external keyboard and invest in a Bluetooth keyboard. Clara Van Gerven came to the same conclusion when she used a screen reader for forty days in 2015.

It was pretty cool to activate Screen Curtain mode with a triple-tap using three fingers. This turned the screen off but kept the phone unlocked, so I could continue to browse my phone without anyone watching. This feature is essential for blind users who might otherwise be unwittingly giving their passwords to the person watching over their shoulder, but it also has a side benefit of being great for saving the battery.

Summary

This was an interesting and challenging experience, and the hardest article of the series to write so far.

I was taken aback by little things that are obvious when you stop and think about them. For instance, when using a screen reader, it’s almost impossible to listen to music at the same time as browsing the web! Keeping the context of the page can also be difficult, especially if you get interrupted by a phone call or something; by the time you get back to the screen reader you’ve kind of lost your place.

My biggest takeaway is that there’s a big cultural shock in going to an audio-only experience. It’s a totally different way to navigate the web, and because there is such a contrast, it is difficult to even know what constitutes a ‘good’ or ‘bad’ screen reader experience. It can be quite overwhelming, and it’s no wonder a lot of developers avoid testing on them.

But we shouldn’t avoid doing it just because it’s hard. As Charlie Owen said in her talk, Dear Developer, the Web Isn’t About You: This. Is. Your. Job. Whilst it’s fun to build beautiful, responsive web applications with all the latest cutting-edge technologies, we can’t just pick and choose what we want to do and neglect other areas. We are the ones at the coal face. We are the only people in the organization capable of providing a good experience for these users. What we choose to prioritize working on today might mean the difference between a person being able to use our site, and them not being able to.

Let us do our jobs responsibly, and let’s make life a little easier for ourselves, with my last tip of the article:

Tip #13: Test on a screen reader, little and often.

I’ve tested on screen readers before, yet I was very ropey trying to remember my way around, which made the day more difficult than it needed to be. I’d have been much more comfortable using a screen reader for the day if I had been regularly using one beforehand, even for just a few minutes per week.

Test a little, test often, and ideally, test on more than one screen reader. Every screen reader is different and will read content out in different ways. Not every screen reader will read “23/10/18” as a date; some will read out “two three slash one zero slash one eight.” Get to know the difference between application bugs and screen reader quirks, by exposing yourself to both.

Did you enjoy this article? This was the third one in a series; read how I Used The Web For A Day With JavaScript Turned Off and how I Used The Web For A Day With Just A Keyboard.


Source: Smashing Magazine, I Used The Web For A Day Using A Screen Reader

Mixing Tangible And Intangible: Designing Multimodal Interfaces Using Adobe XD

Nick Babich



(This article is kindly sponsored by Adobe.) User interfaces are evolving. Voice-enabled interfaces are challenging the long dominance of graphical user interfaces and are quickly becoming a common part of our daily lives. Significant progress in automatic speech recognition (ASR) and natural language processing (NLP), together with an impressive consumer base (millions of mobile devices with built-in voice assistants), has influenced the rapid development and adoption of voice-based interfaces.

Products that use voice as the primary interface are becoming more and more popular. In the US alone, 47.3 million adults have access to a smart speaker (that’s one fifth of the US adult population), and the number is growing. But voice interfaces have a bright future not only in personal and home use. As people become accustomed to voice interfaces, they will come to expect them in a business context as well. Just imagine that soon you’ll be able to trigger a conference-room projector by saying something like, “Show my presentation”.

It’s evident that human-machine communication is rapidly expanding to encompass both written and spoken interaction. But does it mean that future interfaces will be voice-only? Despite some science-fiction portrayals, voice won’t completely replace graphical user interfaces. Instead, we’ll have a synergy of voice, visual and gesture in a new format of interface: a voice-enabled, multimodal interface.

In this article, we’ll:

  • explore the concept of a voice-enabled interface and review different types of voice-enabled interfaces;
  • find out why voice-enabled, multimodal user interfaces will be the preferred user experience;
  • see how you can build a multimodal UI using Adobe XD.

The State Of Voice User Interfaces (VUI)

Before diving into the details of voice user interfaces, we must define what voice input is. Voice input is a human-computer interaction in which a user speaks commands instead of writing them. The beauty of voice input is that it’s a more natural interaction for people — users are not restricted to a specific syntax when interacting with a system; they can structure their input in many different ways, just as they would do in human conversation.

Voice user interfaces bring the following benefits to their users:

  • Less interaction cost
    Although using a voice-enabled interface does involve an interaction cost, this cost is smaller (in theory) than that of learning a new GUI.
  • Hands-free control
    VUIs are great for when the user’s hands are busy — for example, while driving, cooking or exercising.
  • Speed
    Voice is excellent when asking a question is faster than typing it and reading through the results. For example, when using voice in a car, it is faster to say the place to a navigation system, rather than type the location on a touchscreen.
  • Emotion and personality
    Even when we hear a voice but don’t see an image of a speaker, we can picture the speaker in our head. This creates an opportunity to improve user engagement.
  • Accessibility
    Visually impaired users and users with a mobility impairment can use voice to interact with a system.

Three Types Of Voice-Enabled Interfaces

Depending on how voice is used, it could be one of the following types of interfaces.

Voice Agents In Screen-First Devices

Apple Siri and Google Assistant are prime examples of voice agents. For such systems, the voice acts more like an enhancement for the existing GUI. In many cases, the agent acts as the first step in the user’s journey: The user triggers the voice agent and provides a command via voice, while all other interactions are done using the touchscreen. For example, when you ask Siri a question, it will provide answers in the format of a list, and you need to interact with that list. As a result, the user experience becomes fragmented — we use voice to initiate the interaction and then shift to touch to continue it.


Siri executes a voice command to search for news, but then requires users to touch the screen in order to read the items.

Voice-Only Devices

These devices don’t have visual displays; users rely on audio for both input and output. Amazon Echo and Google Home smart speakers are prime examples of products in this category. The lack of a visual display is a significant constraint on the device’s ability to communicate information and options to the user. As a result, most people use these devices to complete simple tasks, such as playing music and getting answers to simple questions.


Amazon Echo Dot is a screen-less device.

Voice-First Devices

With voice-first systems, the device accepts user input primarily via voice commands, but also has an integrated screen display. It means that voice is the primary user interface, but not the only one. The old saying, “A picture is worth a thousand words” still applies to modern voice-enabled systems. The human brain has incredible image-processing abilities — we can understand complex information faster when we see it visually. Compared to voice-only devices, voice-first devices allow users to access a larger amount of information and make many tasks much easier.

The Amazon Echo Show is a prime example of a device that employs a voice-first system. Visual information is gradually incorporated as part of a holistic system — the screen is not loaded with app icons; rather, the system encourages users to try different voice commands (suggesting verbal commands such as, “Try ‘Alexa, show me the weather at 5:00 pm’”). The screen even makes common tasks such as checking a recipe while cooking much easier — users don’t need to listen carefully and keep all of the information in their heads; when they need the information, they simply look at the screen.


Amazon Echo Show is basically an Amazon Echo speaker with a screen.

Introducing Multimodal Interfaces

When it comes to using voice in UI design, don’t think of voice as something you can use alone. Devices such as Amazon Echo Show include a screen but employ voice as the primary input method, making for a more holistic user experience. This is the first step towards a new generation of user interfaces: multimodal interfaces.

A multimodal interface is an interface that blends voice, touch, audio and different types of visuals in a single, seamless UI. Amazon Echo Show is an excellent example of a device that takes full advantage of a voice-enabled multimodal interface. When users interact with Show, they make requests just as they would with a voice-only device; however, the response they receive will likely be multimodal, containing both voice and visual responses.

Multimodal products are more complex than products that rely only on visuals or only on voice. Why should anyone create a multimodal interface in the first place? To answer that question, we need to step back and see how people perceive the environment around them. People have five senses, and the combination of our senses working together is how we perceive things. For example, our senses work together when we are listening to music at a live concert. Remove one sense (for example, hearing), and the experience takes on an entirely different context.



For too long, we’ve thought about the user experience as exclusively either visual or gestural design. It’s time to change this thinking. Multimodal design is a way to think about and design for experiences that connect our sensory abilities together.

Multimodal interfaces feel like a more human way for user and machine to communicate. They open up new opportunities for deeper interactions. And today, it’s much easier to design multimodal interfaces because the technical limitations that in the past constrained interactions with products are being erased.

The Difference Between A GUI And Multimodal Interface

The key difference here is that multimodal interfaces like Amazon Echo Show sync voice and visual interfaces. As a result, when we’re designing the experience, the voice and visuals are no longer independent parts; they are integral parts of the experience that the system provides.

Visual And Voice Channel: When To Use Each

It’s important to think about voice and visuals as channels for input and output. Each channel has its own strengths and weaknesses.

Let’s start with the visuals. It’s clear that some information is just easier to understand when we see it, rather than when we hear it. Visuals work better when you need to provide:

  • long lists of options (reading a long list will take a lot of time and be difficult to follow);
  • data-heavy information (such as diagrams and graphs);
  • product information (for example, products in online shops; most likely, you would want to see a product before buying) and product comparison (as with the long list of options, it would be hard to provide all of the information using only voice).

For some information, however, we can easily rely on verbal communication. Voice might be the right fit for the following cases:

  • user commands (voice is an efficient input modality, allowing users to give commands to the system quickly and bypassing complex navigation menus);
  • simple user instructions (for example, a routine check on a prescription);
  • warnings and notifications (for example, an audio warning paired with voice notifications during driving).

While these are a few typical cases of visual and voice combined, it’s important to know that we can’t separate the two from each other. We can create a better user experience only when both voice and visuals work together. For example, suppose we want to purchase a new pair of shoes. We could use voice to request from the system, “Show me New Balance shoes.” The system would process the request and visually provide product information (an easier way for us to compare shoes).

What You Need To Know To Design Voice-Enabled, Multimodal Interfaces

Voice is one of the most exciting challenges for UX designers. Despite its novelty, the fundamental rules for designing voice-enabled, multimodal interfaces are the same as those we use to create visual designs. Designers should care about their users. They should aim to reduce friction for the user by solving their problems in efficient ways, and prioritize clarity to make the user’s choices clear.

But there are some unique design principles for multimodal interfaces as well.

Make Sure You Solve The Right Problem

Design should solve problems. But it’s vital to solve the right problems; otherwise, you could spend a lot of time creating an experience that doesn’t bring much value to users. Thus, make sure you’re focused on solving the right problem. Voice interactions should make sense to the user; users should have a compelling reason to use voice over other methods of interaction (such as clicking or tapping). That’s why, when you create a new product — even before starting the design — it’s essential to conduct user research and determine whether voice would improve the UX.

Start with creating a user journey map. Analyze the journey map and find places where including voice as a channel would benefit the UX.

  • Find places in the journey where users might encounter friction and frustration. Would using voice reduce the friction?
  • Think about the context of the user. Would voice work for a particular context?
  • Think about what is uniquely enabled by voice. Remember the unique benefits of using voice, such as hands-free and eyes-free interaction. Could voice add value to the experience?

Create Conversational Flows

Ideally, the interfaces you design should require zero interaction cost: Users should be able to fulfill their needs without spending extra time on learning how to interact with the system. This happens only when voice interaction resembles a real conversation, not a system dialog wrapped in the format of voice commands. The fundamental rule of a good UI is simple: Computers should adapt to humans, not the other way around.

People rarely have flat, linear conversations (conversations that only last one turn). That’s why, to make interaction with a system feel like a live conversation, designers should focus on creating conversational flows. Each conversational flow consists of dialogs — the pathways that occur between the system and the user. Each dialog would include the system’s prompts and the user’s possible responses.

A conversational flow can be presented in the form of a flow diagram. Each flow should focus on one particular use case (for example, setting an alarm clock using a system). For most dialogs in a flow, it’s vital to consider error paths for when things go off the rails.

Each voice command of the user consists of three key elements: intent, utterance and slot, as illustrated in the sketch after this list.

  • Intent is the objective of the user’s interaction with a voice-enabled system.
    An intent is just a fancy way of defining the purpose behind a set of words. Each interaction with a system brings the user some utility. Whether it’s information or an action, the utility is in intent. Understanding the user’s intent is a crucial part of voice-enabled interfaces. When we design VUI, we don’t always know for sure what a user’s intent is, but we can guess it with high accuracy.
  • Utterance is how the user phrases their request.
    Usually, users have more than one way to formulate a voice command. For example, we can set an alarm clock by saying “Set alarm clock to 8 am”, or “Alarm clock 8 am tomorrow” or even “I need to wake up at 8 am.” Designers need to consider every possible variation of utterance.
  • Slots are variables that users use in a command. Sometimes users need to provide additional information in the request. In our example of the alarm clock, “8 am” is a slot.
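As a minimal sketch, here is how the alarm clock example might be encoded, loosely modeled on the interaction-model schema used by platforms such as Amazon Alexa (the intent and slot names here are hypothetical):

const interactionModel = {
  invocationName: 'alarm helper',
  intents: [
    {
      name: 'SetAlarmIntent',                          // intent: the user's objective
      slots: [{ name: 'time', type: 'AMAZON.TIME' }],  // slot: a variable in the command
      samples: [                                       // utterances: different phrasings of the same intent
        'set alarm clock to {time}',
        'alarm clock {time} tomorrow',
        'I need to wake up at {time}'
      ]
    }
  ]
};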

Don’t Put Words In The User’s Mouth

People know how to talk. Don’t try to teach them commands. Avoid phrases like, “To send a meeting appointment, you need to say ‘Calendar, meetings, create a new meeting’.” If you have to explain commands, you need to reconsider the way you’re designing the system. Always aim for natural language conversation, and try to accommodate diverse speaking styles.

Strive For Consistency

You need to achieve consistency in language and voice across contexts. Consistency will help to build familiarity in interactions.

Always Provide Feedback

Visibility of system status is one of the fundamental principles of good GUI design. The system should always keep users informed of what is going on through appropriate feedback within a reasonable time. The same rule applies to VUI design.

  • Make the user aware that the system is listening.
    Show visual indicators when the device is listening or processing the user’s request. Without feedback, the user can only guess whether the system is doing something. That’s why even voice-only devices such as Amazon Echo and Google Home give us nice visual feedback (flashing lights) when they are listening or searching for an answer.
  • Provide conversational markers.
    Conversational markers tell the user where they’re at in the conversation.
  • Confirm when a task is completed.
    For example, when users ask the voice-enabled smart home system “Turn off the lights in the garage”, the system should let the user know that the command has been successfully executed. Without confirmation, users will need to walk into the garage and check the lights. It defeats the purpose of the smart home system, which is to make the user’s life easier.

Avoid Long Sentences

When designing a voice-enabled system, consider the way you provide information to users. It’s relatively easy to overwhelm users with too much information when you use long sentences. First, users can’t retain a lot of information in their short-term memory, so they can easily forget some important information. Also, audio is a slow medium — most people can read much faster than they can listen.

Be respectful of your user’s time; don’t read out long audio monologues. When you’re designing a response, the fewer words you use, the better. But remember that you still need to provide enough information for the user to complete their task. Thus, if you cannot summarize an answer in a few words, display it on the screen instead.

Provide Next Steps Sequentially

Users can be overwhelmed not only by long sentences, but also by the number of options available at one time. It’s vital to break down the process of interaction with a voice-enabled system into bite-sized chunks. Limit the number of choices the user has at any one time, and make sure they know what to do at every moment.

When designing a complex voice-enabled system with a lot of features, you can use the technique of progressive disclosure: Present only the options or information necessary to complete the task.

Have A Strong Error-Handling Strategy

Of course, the system should prevent errors from occurring in the first place. But no matter how good your voice-enabled system is, you should always design for the scenario in which the system doesn’t understand the user. Your responsibility is to design for such cases.

Here are a few practical tips for creating a strategy:

  • Don’t blame the user.
    In conversation, there are no errors. Try to avoid responses like, “Your answer is incorrect.”
  • Provide error-recovery flows.
    Provide an option for back-and-forths in a conversation, or even to exit the system, without losing important information. Save the user’s state in the journey, so that they can re-engage with the system right from where they left off.
  • Let users replay information.
    Provide an option to make the system repeat the question or answer. This might be helpful for complex questions or answers where it would be hard for the user to commit all of the information to their working memory.
  • Provide stop wording.
    In some cases, the user will not be interested in listening to an option and will want the system to stop talking about it. Stop wording should help them do just that.
  • Handle unexpected utterances gracefully.
    No matter how much you invest in the design of a system, there will be situations when the system doesn’t understand the user. It’s vital to handle such cases gracefully. Don’t be afraid to let the system admit a lack of understanding. The system should communicate what it has understood and provide helpful reprompts.
  • Use analytics to improve your error strategy.
    Analytics can help you identify wrong turns and misinterpretations.

Keep Track Of Context

Make sure the system understands the context of the user’s input. For example, when someone says that they want to book a flight to San Francisco next week, they might refer to “it” or “the city” during the conversational flow. The system should remember what was said and be able to match it to the newly received information.
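As a toy sketch in JavaScript (the matching logic is deliberately naive and purely illustrative), the system could remember the last-mentioned destination and substitute it when the user later says “it” or “the city”:

const context = {};

function resolveUtterance(utterance) {
  // Remember a destination phrased as "to <City>"
  const cityMatch = utterance.match(/\bto ([A-Z][a-zA-Z ]+?)(?= next| tomorrow|$)/);
  if (cityMatch) {
    context.city = cityMatch[1];
  }
  // Resolve "it" or "the city" against the remembered destination
  if (context.city) {
    return utterance.replace(/\b(it|the city)\b/g, context.city);
  }
  return utterance;
}

resolveUtterance('book a flight to San Francisco next week');
resolveUtterance('what is the weather like in the city');
// -> "what is the weather like in San Francisco"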

Learn About Your Users To Create More Powerful Interactions

A voice-enabled system becomes more sophisticated when it uses additional information (such as user context or past behavior) to understand what the user wants. This technique is called intelligent interpretation, and it requires that the system actively learn about the user and be able to adjust its behavior accordingly. This knowledge will help the system to provide answers even to complex questions, such as “What gift should I buy for my wife’s birthday?”

Give Your VUI A Personality

Every voice-enabled system has an emotional impact on the user, whether you plan for it or not. People associate voice with humans rather than machines. According to Speak Easy Global Edition research, 74% of regular users of voice technology expect brands to have unique voices and personalities for their voice-enabled products. It’s possible to build empathy through personality and achieve a higher level of user engagement.

Try to reflect your unique brand and identity in the voice and tone you present. Construct a persona of your voice-enabled agent, and rely on this persona when creating dialogs.

Build Trust

When users don’t trust a system, they don’t have the motivation to use it. That’s why building trust is a requirement of product design. Two factors have a significant impact on the level of trust built: system capabilities and valid outcome.

Building trust starts with setting user expectations. Traditional GUIs have a lot of visual details to help the user understand what the system is capable of. With a voice-enabled system, designers have fewer tools to rely on. Still, it’s vital to make the system naturally discoverable; the user should understand what is and isn’t possible with the system. That’s why a voice-enabled system might require user onboarding, where it talks about what the system can do or what it knows. When designing onboarding, try to offer meaningful examples to let people know what it can do (examples work better than instructions).

When it comes to valid outcomes, people know that voice-enabled systems are imperfect. When a system provides an answer, some users might doubt that the answer is correct. This happens because users don’t have any information about whether their request was correctly understood or what algorithm was used to find the answer. To prevent trust issues, use the screen for supporting evidence — display the original query on the screen — and provide some key information about the algorithm. For example, when a user asks, “Show me the top five movies of 2018”, the system can say, “Here are the top five movies of 2018 according to the box office in the US”.

Don’t Ignore Security And Data Privacy

Unlike mobile devices, which belong to the individual, voice devices tend to belong to a location, like a kitchen. And usually, there is more than one person in the same location. Just imagine that someone else could interact with a system that has access to all of your personal data. Some VUI systems, such as Amazon Alexa, Google Assistant and Apple Siri, can recognize individual voices, which adds a layer of security to the system. Still, it doesn’t guarantee that the system will be able to recognize users based on their unique voice signature in 100% of cases.

Voice recognition is continually improving, and imitating a voice may become hard or nearly impossible in the near future. However, in the current reality, it’s vital to provide an additional authentication layer to reassure the user that their data is safe. If you design an app that works with sensitive data, such as health information or banking details, consider including an extra authentication step, such as a password, fingerprint, or face recognition.
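As a minimal sketch of that idea (hypothetical throughout; there is no standard cross-platform API for this, so the intent names and the verifySecondFactor() helper below are invented for illustration):

```js
// Hypothetical guard for sensitive voice intents: fall back to a stronger
// factor than the voice signature alone before fulfilling the request.
const SENSITIVE_INTENTS = new Set(["transfer_money", "read_health_record"]);

async function fulfill(intent, user) {
  if (SENSITIVE_INTENTS.has(intent.name) && !(await user.verifySecondFactor())) {
    return { speech: "Please confirm with your PIN or fingerprint to continue." };
  }
  return { speech: `OK, handling ${intent.name}.` };
}

// Example usage with a stubbed user whose second factor fails:
const user = { verifySecondFactor: async () => false };
fulfill({ name: "transfer_money" }, user).then((r) => console.log(r.speech));
// "Please confirm with your PIN or fingerprint to continue."
```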

Conduct Usability Testing

Usability testing is a mandatory requirement for any system. “Test early, test often” should be a fundamental rule of your design process. Gather user research data early on, and iterate on your designs. But testing multimodal interfaces has its own specifics. Here are a few phases that should be taken into account:

  • Ideation phase
    Test drive your sample dialogs. Practice reading sample dialogs out loud. Once you have some conversational flows, record both sides of the conversation (the user’s utterances and the system’s responses), and listen to the recording to understand whether they sound natural.
  • Early stages of product development (testing with lo-fi prototypes)
    Wizard of Oz testing is well suited to testing conversational interfaces: the participant interacts with a system that they believe is operated by a computer but that is, in fact, operated by a human. The test participant formulates a query, and a real person responds on the other end. The method gets its name from the book The Wonderful Wizard of Oz by L. Frank Baum, in which an ordinary man hides behind a curtain, pretending to be a powerful wizard. This kind of test allows you to map out every possible scenario of interaction and, as a result, create more natural interactions. Say Wizard is a great tool to help you run a Wizard of Oz voice-interface test on macOS. For a walkthrough, see “Designing For Voice: The ‘Wizard Of Oz’ Method” (watch on Vimeo).
  • Later stages of product development (testing with hi-fi prototypes)
    In usability testing of graphical user interfaces, we often ask users to think out loud as they interact with a system. For a voice-enabled system, that’s not always possible, because the system would be listening to that narration. So, it might be better to observe the user’s interactions with the system rather than ask them to narrate out loud.

How To Create A Multimodal Interface Using Adobe XD

Now that you have a solid understanding of what multimodal interfaces are and what rules to remember when designing them, we can discuss how to make a prototype of a multimodal interface.

Prototyping is a fundamental part of the design process. Being able to bring an idea to life and share it with others is extremely important. Until now, designers who wanted to incorporate voice in prototyping had few tools to rely on, the most powerful of which was a flowchart. Picturing how a user would interact with a system required a lot of imagination from someone looking at the flowchart. With Adobe XD, designers now have access to the medium of voice and can use it in their prototypes. XD seamlessly connects screen and voice prototyping in one app.

New Experiences, Same Process

Even though voice is a totally different medium from visual, the process of prototyping for voice in Adobe XD is pretty much the same as prototyping for a GUI. The Adobe XD team has integrated voice in a way that will feel natural and intuitive to any designer. Designers can use voice triggers and speech playback to interact with prototypes:

  • Voice triggers start an interaction when a user says a particular word or phrase (utterance).
  • Speech playback gives designers access to a text-to-speech engine. XD will speak words and sentences defined by a designer. Speech playback can be used for many different purposes. For example, it can act as an acknowledgment (to reassure users) or as guidance (so users know what to do next).

The great thing about XD is that it doesn’t force you to learn the complexities of each voice platform.

Enough words — let’s see how it works in action. For all of the examples below, I’ve used artboards created with the Adobe XD UI kit for Amazon Alexa. The kit contains all of the styles and components needed to create experiences for Amazon Alexa.

Suppose we have the following artboards:


example of an artboard
(Large preview)

Let’s go into prototyping mode to add in some voice interactions. We’ll start with voice triggers. Along with triggers such as tap and drag, we are now able to use voice as a trigger. We can use any layers for voice triggers as long as they have a handle leading to another artboard. Let’s connect the artboards together.


Connecting artboards together
Connecting artboards together. (Large preview)

Once we do that, we’ll find a new “Voice” option under “Trigger”. When we select this option, we’ll see a “Command” field that we can use to enter an utterance — this is what XD will actually be listening for. Users will need to speak this command to activate the trigger.


Setting a voice trigger in Adobe XD.
Setting a voice trigger in Adobe XD. (Large preview)

That’s all! We’ve defined our first voice interaction. Now, users can say something, and a prototype will respond to it. But we can make this interaction much more powerful by adding speech playback. As I mentioned previously, speech playback allows a system to speak some words.

Select the entire second artboard, and click on the blue handle. Choose a “Time” trigger with a delay, and set it to 0.2s. Under the action, you’ll find “Speech Playback”. Here, we write down what the virtual assistant will speak back to us.


Using the Command option to enter an utterance or speak a command to activate the trigger
(Large preview)

We’re ready to test our prototype. Select the first artboard, and click the play button in the top right to launch a preview window. When interacting with voice prototyping, make sure your mic is on. Then, hold down the spacebar to speak the voice command. This input triggers the next action in the prototype.

Use Auto-Animate To Make The Experience More Dynamic

Animation brings a lot of benefits to UI design. It serves clear functional purposes, such as:

  • communicating the spatial relationships between objects (Where does the object come from? Are those objects related?);
  • communicating affordance (What can I do next?).

But functional purposes aren’t the only benefits of animation; animation also makes the experience more alive and dynamic. That’s why UI animations should be a natural part of multimodal interfaces.

With “Auto-Animate” available in Adobe XD, it becomes much easier to create prototypes with immersive animated transitions. Adobe XD does all the hard work for you, so you don’t need to worry about it. All you need to do to create an animated transition between two artboards is simply duplicate an artboard, modify the object properties in the clone (properties such as size, position and rotation), and apply an Auto-Animate action. XD will automatically animate the differences in properties between each artboard.

Let’s see how it works in our design. Suppose we have an existing shopping list in Amazon Echo Show and want to add a new object to the list using voice. Duplicate the following artboard:


Artboard: shopping list.
Artboard: shopping list. (Large preview)

Let’s introduce some changes in the layout: add a new object. We aren’t limited here; we can modify any properties, such as text attributes, color, opacity, or the position of the object. Whatever changes we make, XD will animate between them.


Two artboards: our original shopping list and its duplicate with a new item.
Two artboards: our original shopping list and its duplicate with a new item. (Large preview)

When you wire two artboards together in prototype mode using Auto-Animate in “Action”, XD will automatically animate the differences in properties between each artboard.


When you wire two artboards together in prototype mode using Auto-Animate in “Action”, XD will automatically animate the differences in properties between each artboard.
(Large preview)

And here’s how the interaction will look to users:

One crucial thing worth mentioning: keep the names of all of the layers the same; otherwise, Adobe XD won’t be able to apply the auto-animation.

Conclusion

We’re at the dawn of a user interface revolution. A new generation of interfaces — multimodal interfaces — not only will give users more power, but will also change the way users interact with systems. We will probably still have displays, but we won’t need keyboards to interact with the systems.

At the same time, the fundamental requirements for designing multimodal interfaces won’t be much different from those of designing modern interfaces. Designers will need to keep the interaction simple; focus on the user and their needs; design, prototype, test and iterate.

And the great thing is that you don’t need to wait to start designing for this new generation of interfaces. You can start today.

This article is part of the UX design series sponsored by Adobe. Adobe XD is made for a fast and fluid UX design process, as it lets you go from idea to prototype faster. Design, prototype and share — all in one app. You can check out more inspiring projects created with Adobe XD on Behance, and also sign up for the Adobe experience design newsletter to stay updated and informed on the latest trends and insights for UX/UI design.

Smashing Editorial
(ms, ra, al, yk, il)

Source: Smashing Magazine, Mixing Tangible And Intangible: Designing Multimodal Interfaces Using Adobe XD

Don’t Pay To Speak At Commercial Events

dreamt up by webguru in Uncategorized | Comments Off on Don’t Pay To Speak At Commercial Events

Don’t Pay To Speak At Commercial Events

Don’t Pay To Speak At Commercial Events

Vitaly Friedman



Setting up a conference isn’t an easy undertaking. It takes time, effort, patience, and attention to all the little details that make up a truly memorable experience. It’s not something one can take lightly, and it’s often a major personal and financial commitment. After all, somebody has to build a good team and make all those arrangements: flights, catering, parties, badges, and everything in between.

The work that takes place behind the scenes often goes unnoticed and, to an extent, that’s an indication that the planning went well. There are hundreds of accessible and affordable meet-ups, community events, nonprofit events, and small local groups — all fueled by incredible personal efforts of humble, kind, generous people donating their time on the weekends to create an environment for people to share and learn together. I love these events, and I have utter respect and admiration for the work they are doing, and I’d be happy to speak at these events and support these people every day and night, with all the resources and energy I have. These are incredible people doing incredible work; their efforts deserve to be supported and applauded.

Unlike these events, commercial and corporate conferences usually target companies’ employees and organizations with training budgets to send their employees for continuing education. There is nothing wrong with commercial conferences per se and there is, of course, a wide spectrum of such events — ranging from single-day, single-track gatherings with a few speakers, all the way to week-long multi-track festivals with a bigger line-up of speakers. The latter tend to have a higher ticket price, and often a much broader scope. Depending on the size and the reputation of the event, some of them have more or less weight in the industry, so some are perceived to be more important to attend or more prestigious to speak at.

Both commercial and non-commercial events tend to have a so-called Call For Papers (CFP), inviting speakers from all over the world to submit applications for speaking, with a chance of being selected to present at the event. CFPs are widely accepted and established in the industry; however, the idea of CFPs is sometimes challenged and discussed, and not always held in a positive light. While some organizers and speakers consider them a way to lower the barrier to speaking for new talent, for others, CFPs are an easy way out of filling speaking slots. The argument is that CFPs push diversity and inclusion to the review phase, rather than actively seeking them up front. As a result, accepted speakers might feel like they have been “chosen”, which nudges them into accepting low-value compensation.

The key to a fair, diverse, and interesting line-up probably lies somewhere in the middle. It should be the organizer’s job to actively seek, review, and invite speakers who would fit the theme and the scope of the event. Admittedly, as an organizer, unless you are fine with the same speakers appearing at your event year after year, it’s much harder to do than just setting up a call for speakers and waiting for incoming emails to show up. Combining thorough curation with an active CFP phase probably works best, but it’s up to the organizer how the speaking slots are distributed between the two. Luckily, many resources highlight new voices in the industry, such as WomenWhoDesign, which is a good starting point for moving away from the “usual suspects” of the conference circuit.

Many events strongly and publicly commit to creating an inclusive and diverse environment for attendees and speakers with a Code of Conduct. The Code of Conduct explains the values and the principles of the conference organizers, as well as contact details in case any conflict or violation appears. The sheer presence of such a code on a conference website sends a signal to attendees, speakers, sponsors, and the team that some thought has been given to creating an inclusive, safe, and friendly environment for everybody at the event. However, too often at commercial events, the Code of Conduct is considered an unnecessary novelty, and hence is either neglected or forgotten.

Now, there are wonderful, friendly, professional, well-designed and well-curated commercial events with a stellar reputation. These events are committed to diverse and inclusive line-ups, and they always at least cover speakers’ expenses, flights, and accommodation. The reason they’ve gained their reputation over the years is that the organizers can afford to continuously put their heart and soul into running these events year after year — mostly because their time and efforts are remunerated by the profit the conference makes.

Many non-commercial events, fueled by great ideas and hard work, may succeed the first, second, and third time, but unfortunately, it’s not uncommon for them to fade away just a few years later. Mostly because setting up and maintaining the quality of such events takes a lot of personal time, effort, and motivation beyond regular working hours, and it’s just really hard to keep up without the backbone of a strong, stable team or company behind you.

Some conferences aren’t quite like that. In fact, I’d argue that some conferences are pretty much the exact opposite. It’s more likely for them to allocate resources to outstanding catering, lighting, and video production on site rather than to the core of the event: the speakers’ experience. What lurks behind the scenes of such events is a toxic, broken conference culture, despite the hefty ticket price. And more often than not, speakers bear the burden of all of their conference-related expenses, flights, and accommodation (to say nothing of the personal and professional time they are already donating to prepare, rehearse, and travel to and from the event) from their own pockets. This isn’t right, and it shouldn’t be acceptable in our industry.

The Broken State Of Commercial Conferences

Personally, I’ve been privileged to speak at many events over the years and, more often than not, there was a fundamental mismatch between how organizers see speaking engagements and how I perceive them. Don’t get me wrong: speaking at tech conferences has tremendous benefits, and it’s a rewarding experience, full of adventures, networking, traveling, and learning; but it also takes time and effort, usually away from your family, your friends, and your company. For a given talk, it might easily take over 80 hours of work just to get all the research and content prepared, not to mention rehearsal and traveling time. That’s a massive commitment and time investment.

But many conference organizers don’t see it this way. The size of the event, with hundreds and thousands of people attending the conference, is seen as a fair justification for the lack of speaker/training budgets or diversity/student scholarships. It’s remarkably painful to face the same conversations over and over and over again: the general expectation is that speakers should speak for free as they’ve been given a unique opportunity to speak and that neither flights nor expenses should be covered for the very same reason.

It’s sad to see invitation emails delicately avoiding or downplaying the topics of diversity, honorarium, and expenses. Instead, they tend to focus on the size of the event, the brands represented there, the big names that have spoken in the past, and the opportunities such a conference provides. In fact, a good number of CFPs gently avoid mentioning how the conference deals with expenses at all. As a result, an applicant who needs their costs to be covered is often discriminated against, because an applicant whose expenses will be covered by their company is preferred. Some events explicitly require unique content for the talk, while not covering any speaker expenses, essentially asking speakers to work for free.


Speaker stage at BTConf
Preparing for a talk is a massive commitment and time investment. Taking care of the fine details such as the confidence monitor and countdown on stage is one of those little things. (Large preview) (Image source: beyond tellerrand)

It’s disappointing to receive (upon request) quick-paced replies explaining that there isn’t really any budget for speakers, as employers are expected to cover flights and accommodation. Sometimes, as a sign of good faith, the organizers are happy to provide a free platinum pass which would grant exclusive access to all conference talks across all tracks (“worth $2500” or so). And sometimes it goes so far as to be exploitative, when organizers offer a “generous” 50% discount off the regular ticket price, including access to the speakers’ lounge area where one could possibly meet “decision makers” with the opportunity and hope of creating unique and advantageous connections.

It’s both sad and frustrating to read that “most” speakers were “happy to settle for only a slot at the conference.” After all, they are getting an “incredible amount of exposure to decision makers.” Apparently, according to the track record of the conference, it “reliably” helped dozens of speakers in the past to find new work and connect with new C-level clients. Once organizers are asked again (in a slightly more serious tone), suddenly a speaker budget materializes. This basically means that the organizers are willing to pay an honorarium only to those speakers who are confident enough to repeatedly ask for it.

And then, a few months later, it’s hurtful to see the same organizers who chose not to cover speaker expenses, publishing recordings of conference talks behind a paywall, further profiting from speakers’ work without any thought of reimbursing or subsidizing speakers’ content they are repackaging and reselling. It’s not uncommon to run it all under the premise of legal formalities, asking the speaker to sign a speaker’s contract upon arrival.

As an industry, we should and can be better than that. Such treatment of speakers shows a fundamental lack of respect for time, effort, and work done by knowledgeable and experienced experts in our industry. It’s also a sign of a very broken state of affairs that dominates many tech events. It’s not surprising, then, that web conferences don’t have a particularly good reputation, often criticized for being unfair, dull, a scam, full of sponsored sessions, lacking diversity or a waste of money.

Speakers, Make Organizers Want To Invite You

On a personal note, throughout all these years, I have rarely received consultancy projects from “exposure” on stage. More often than not, the time away from family and company costs much more than any honorarium provided. Neither did I meet many “decision-makers” in the speaker lounge as they tend to delicately avoid large gatherings and public spaces to avoid endless pitches and questions. One thing that large conferences do lead to is getting invitations to more conferences; however, expecting a big client from a speaking engagement at corporate events has proved to be quite unrealistic for me. In fact, I tend to get way more work from smaller events and meet-ups where you actually get a chance to have a conversation with people located in a smaller, intimate space.

Of course, everybody has their own experiences and decides for themselves what’s acceptable for them, yet my personal experience taught me to drastically lower my expectations. That’s why after a few years of speaking I started running workshops alongside the speaking engagements. With a large group of people attending a commercial event, full-day workshops can indeed bring a reasonable revenue, with a fair 50% / 50% profit split between the workshop coach and the conference organizer.

Admittedly, during the review of this article, I was approached by some speakers who have had very different experiences; they ended up with big projects and clients only after an active phase of speaking at large events. So your experience may vary, but the one thing I learned over the years is that it’s absolutely critical to keep appearing in industry conversations, so that organizers will seize an opportunity to invite you to speak. For speakers, that’s a much better position to be in.

If you’re a new speaker, consider speaking for free at local meet-ups; it’s fair and honorable — and great training for larger events; the smaller group size and more informal setting allow you to seek valuable feedback about what the audience enjoyed and where you can improve. You can also gain visibility through articles, webinars, and open-source projects. An email to an organizer featuring an interesting topic, alongside a recorded talk, articles, and open-source projects, can bring you and your work to their attention. Organizers are looking for knowledgeable and excited speakers who love and live what they are doing and can communicate that energy and expertise to the audience.

Of course, there may be times when it is reasonable to accept conditions to get an opportunity to reach potential clients, but this decision has to be carefully considered and measured in light of the effort and time investment it requires. After all, it’s you doing them a favor, not the other way around. When speaking at large commercial conferences without any remuneration, basically you are trading your name, your time and your money for the promise of gaining exposure while helping the conference sell tickets along the way.

Organizers, Allocate The Speaking Budget First

I don’t believe for a second that most organizers have bad intentions; nor do I believe that they mean to cut corners at all costs to maximize profit. From my conversations with organizers, I clearly see that they share the same goals that community events have, as they do their best to create a wonderful and memorable event for everybody involved, while also paying the bills for all the hard-working people who make the event happen. After all, the conference business isn’t an easy one, and you hardly ever know how ticket sales will go next year. Still, there seems to be a fundamental mismatch of priorities and expectations.

Setting up a conference is an honorable thought, but you need a comprehensive financial plan of what it costs and how much you can spend. As mentioned above, too many promising events fade away because they are powered by the motivation of a small group of people who also need to earn money with their regular job. Conference organizers deserve to get revenue to share across the team, as working on a voluntary basis is often not sustainable.


Sarah Drasner presenting on stage at ColdFront 2018
All organizers have the same goal: to create wonderful, memorable events for everybody involved. (Large preview) (Image source: ColdFront)

To get a better understanding of how to get there, I can only recommend the fantastic Conference Organizer’s Handbook by Peter-Paul Koch, which covers a general strategy for setting up a truly professional event — from planning to pricing to running it — without burning out. Bruce Lawson has also prepared a comprehensive list of questions that could be addressed in the welcome email to speakers. Plus, Lara Hogan has written an insightful book, Demystifying Public Speaking, which I highly encourage you to look at as well.

Yes, venues are expensive, and yes, so is catering, and yes, so is AV and technical setup. But before allocating thousands on food, roll-ups, t-shirts, and an open bar, allocate decent budgets for speakers first, especially for new voices in the industry — they are the ones who are likely to spend dozens or hundreds of hours preparing that one talk.

Jared Spool noted while reviewing this article:

“The speaking budget should come before the venue and the catering. After all, the attendees are paying to see the speakers. You can have a middling venue and mediocre catering, but if you have an excellent program, it’s a fabulous event. In contrast, you can have a great venue and fantastic food, but if the speakers are boring or off topic, the event will not be successful. Speaking budgets are an investment in the value of the program. Every penny invested is one that pays back in multiples. You certainly can’t say the same for venue or food.”

No fancy bells and whistles are required; speaker dinners or speaker gifts are a wonderful token of attention and appreciation, but they can’t be a replacement for covering expenses. It’s neither fair nor honest to push the costs over to speakers, and it’s simply not acceptable to expect them to cover these costs for exposure, especially if a conference charges attendees several hundred Euros (or Dollars) per ticket. By not covering expenses, you’re depriving the industry of hearing from those who can’t easily fund their own conference travel — people who care for children or other relatives, people with disabilities who can’t travel without their carer, or people from remote areas or low-income countries where a flight might cost multiple months of income.

Jared continues:

“The formula is:

Break_Even = Fixed_Costs / (Ticket_Price − Variable_Costs)

Costs, such as speakers and venue, are the biggest for break-even numbers. Catering costs are mostly variable costs and should be calculated on a per-attendee basis, to then subtract them from the price. To calculate the speaker budget, determine what the ticket price and variable per-attendee costs are up front, then use the net margin from that to figure out how many speakers you can afford, by dividing net margin into the total speaker budget. That will tell you how many tickets you must sell to make a profit. (If you follow the same strategy for the venue, you’ll know your overall break-even and when you start making profit.) Consider paying a bonus to speakers who the audience rates as delivering the best value. Hence, you’re rewarding exactly what benefits the attendees.”
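To make the formula concrete, here is a quick worked example with made-up numbers: with fixed costs of $40,000 (venue, speakers, AV), a ticket price of $500, and variable per-attendee costs of $100 (mostly catering), the net margin per ticket is $500 − $100 = $400, so Break_Even = 40,000 / 400 = 100 tickets. By the same logic, a total speaker budget of $12,000 divided by the $400 net margin means that 30 ticket sales are needed to cover the speakers.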

That’s a great framework to work within. Instead of leaving the speaker budget dependent on ticket sales and variable costs, set the speaker budget first. What would be a fair honorarium for speakers? Well, there is no general rule for establishing this. However, for smaller commercial events in Europe, it’s common to allocate the price of 3–5 tickets to each speaker. For a large conference with hundreds or thousands of attendees, three tickets should probably be a minimum, but the budget would also have to be distributed among simultaneous talks, and hence depends on the number of tracks and how many attendees are expected per talk.


Attendees at the performance.now() conference in Amsterdam, 2018
Dear organizers, options matter. Remember to label food (e.g. vegan, vegetarian, and so on). It’s the little details that matter most. (Large preview) (Image source: performance.now())

Provide an honorarium, even if it isn’t much. Also, ask speakers to collect all receipts so you can cover them later, or provide a per diem (flat daily expenses coverage) to avoid the hassle of receipts. As a standard operating procedure, offer to buy the flight tickets for the speaker unless they’d rather do it on their own. Some speakers might not have the privilege of spending hundreds of dollars on a ticket and then waiting months for reimbursement. Also, it’s a nice gesture to organize pre-paid transport from and to the airport, so drivers with a sign will be waiting for the speaker in the arrival area. (There is nothing more frustrating than realizing that your cabbie accepts only local cash — especially after a flight delay that gets you in late at night.)

Once all of these costs are covered, consider providing a mentor to help newcomers draft, refine, adjust, review and iterate the talk a few times, and set aside a separate time when they could visit the venue and run through their slides, just to get a feeling of what it’s going to be like on stage.

On the positive side, if you’ve ever wondered about a high speaker drop-out rate at your event, not covering expenses might be a good reason for it. If speakers are paying on their own, you shouldn’t expect them to treat the speaking engagement as a priority.

As Laurie Barth noted when reviewing this article:

“If you aren’t paid for your time, then you likely have less unpaid time to give to preparing your talk and/or have less incentive to prioritize the travel and time for the talk.”

The time, work, effort, and commitment of your speakers are what make the conference a success.

Organizer’s Checklist

  • Cover all speakers’ expenses by default, and outline what’s included from the very start (in invitation letters) and before someone invests their time in completing a CFP form;
  • Avoid the hassle of receipts, and offer at least a flat per diem;
  • Offer to buy the flight tickets for the speaker rather than reimbursing them later, and organize pre-paid transport pick-up if applicable;
  • Allocate budgets and honoraria for speakers, coaching, and mentoring early on. Good content is expensive, and if your ticket prices can’t cover it, refine the conference format to make it viable;
  • Provide an option to donate an honorarium and expenses covered by companies toward a diversity/student scholarship;
  • As a principle, never accept voiding the honorarium. If the speaker can’t be paid or their expenses can’t be covered, dedicate the funds to the scholarship or a charity, and be public about it;
  • Be honest and sincere about your expectations, and explain up front, in the CFP or in the speaking invitation, which expenses you cover and which you don’t.

Speakers, Ask Around Before Agreeing To Speak

Think twice before submitting a proposal to a conference that doesn’t cover at least your costs despite a high ticket price. It’s not acceptable to be asked to pay for your own travel and accommodation. If an event isn’t covering your expenses, then you are paying to speak at that event. It might not seem to matter much if your time and expenses are covered by your employer, but it puts freelancers and new speakers at a disadvantage. If your company is willing to pay for your speaking engagement, ask the organizers to donate the same amount to a charity of your choice, or to sponsor a diversity/student scholarship to enable newcomers to speak at the event.

Come up with a fair honorarium for your time given your interest and the opportunity, and if possible, make exceptions for nonprofits, community events, or whenever you see a unique value for yourself. Be very thorough and selective with conferences you speak at, and feel free to ask around about how other speakers have been treated in the past. Look at past editions of the event and ask speakers who attended or spoke there about their experience as well as about the reputation of the conference altogether.

If you are new to the industry, asking around could be quite uncomfortable, but it’s actually a common practice among speakers, so they should be receptive to the idea. I’m very confident that most speakers would be happy to help, and I know that our entire team — Rachel, Bruce, me, and the entire Smashing Crew — would love to help, anytime.

Before committing to speak at a conference, ask questions. Ethan Marcotte has prepared a useful little template with questions about compensation and general treatment of speakers (thanks to Jared for the tip!). Ask about the capacity and expected attendance of the conference, and what the regular price of the ticket is. Ask what audience is expected, and what profile they have. Ask about conference accessibility, i.e. whether there will be talk captioning/transcripts available to the audience, or even sign language interpreters. Ask if there is a commitment to a diverse line-up of speakers. Ask if other speakers get paid, and if yes, how much. Ask if traveling and accommodation are covered for all speakers, by default. Ask if there is a way to increase honorarium by running a workshop, a review session or any other activities. Since you are dedicating your time, talents, and expertise to the event, think of it as your project, and value the time and effort you will spend preparing. Decide what’s acceptable to you and make exceptions when they matter.


Speaker presenting on stage at the ColdFront conference in 2018
Dear speakers, feel free to ask how other speakers have been treated in the past. It’s your right; don’t feel uncomfortable asking about what is important to you and what you want to know beforehand. (Large preview) (Image source: ColdFront)

As you expect fair treatment from organizers, treat organizers the same way. Respect organizers’ time and efforts. They are covering your expenses, but that doesn’t mean it’s acceptable to spend a significant amount without asking for permission first. Obviously, unexpected costs might come up, and personal issues might appear, and most organizers will fully understand that. But don’t use the opportunity as a carte blanche for upscale cocktails or fancy meals — you probably won’t be invited again. Also, if you can’t come to speak due to unforeseen circumstances, suggest a speaker who could replace your session, and inform the organizer as soon as you can.

Speaker’s Checklist

  • Think twice before applying to a large commercial event that doesn’t cover your expenses;
  • If your company is covering expenses, consider asking organizers to donate the same amount to a charity of your choice, or to sponsor a diversity/student scholarship;
  • Be very thorough and selective with the conferences you speak at, and ask how other speakers have been treated in the past;
  • Prepare a little template of questions to ask an organizer before confirming a speaking engagement;
  • Support nonprofits and local events if you can dedicate your time to speak for free;
  • Choose a fair honorarium for a talk, and decide on a case-by-case basis;
  • Ask whether videos will be publicly available;
  • Ask about conference accessibility, i.e. whether there will be talk captioning/transcripts, or sign language interpreters;
  • Treat organizers with respect when you have to cancel your engagement or modify your arrangements.

Our Industry Deserves Better

As an attendee, you always have a choice. Of course, you want to learn and get better, and you want to connect with wonderful like-minded people. However, be selective in choosing the conference you attend next. More often than not, all the incredible catering and free alcohol all night long are carried on the shoulders of speakers who speak for free and pay their expenses out of their own pockets. Naturally, conferences that respect speakers’ time and professional skills compensate them and cover their expenses.

So support conferences that support and enable tech speakers. There are plenty of them out there — it just requires a bit more effort to explore and decide which event to attend next. Web conferences can be great, wonderful, inspirational, and friendly — regardless of whether they are large commercial conferences or small community-driven ones — but first and foremost they have to be fair and respectful, covering the very basics first. Treating speakers well is one of those basics.


I’d like to kindly thank Rachel Andrew, Bruce Lawson, Jesse Hernandez, Amanda Annandale, Mariona Ciller, Sebastian Golasch, Jared Spool, Peter-Paul Koch, Artem Denysov, Markus Gebka, Stephen Hay, Matthias Meier, Samuel Snopko, Val Head, Rian Kinney, Jenny Shen, Luc Poupard, Toni Iordanova, Lea Verou, Niels Leenheer, Cristiano Rastelli, Sara Soueidan, Heydon Pickering, David Bates, Mariona C. Miller, Vadim Gorbachev, David Pich, Patima Tantiprasut, Laurie Barth, Nathan Curtis, Ujjwal Sharma, Benjamin Hong, Matthias Ott, Scott Gould, Charis Rooda, Zach Leatherman, Marcy Sutton, Bertrand Lirette, Roman Kuba, Eva Ferreira, Joe Leech, Yoav Weiss, Markus Seyfferth and Bastian Widmer for reviewing the article.

Smashing Editorial
(ra, il)

Source: Smashing Magazine, Don’t Pay To Speak At Commercial Events

Monthly Web Development Update 12/2018: WebP, The State Of UX, And A Low-Stress Experiment

dreamt up by webguru in Uncategorized | Comments Off on Monthly Web Development Update 12/2018: WebP, The State Of UX, And A Low-Stress Experiment

Monthly Web Development Update 12/2018: WebP, The State Of UX, And A Low-Stress Experiment

Monthly Web Development Update 12/2018: WebP, The State Of UX, And A Low-Stress Experiment

Anselm Hannemann



It’s the last edition of this year, and I’m pretty stoked about what 2018 brought us, what happened, and how the web evolved. Let’s recap that and remind ourselves of what each of us learned this year: What was the most useful feature, API, or library we used? And how have we personally changed?

For this month’s update, I’ve collected yet another bunch of articles for you. If that’s not enough reading material, you can always find more in the archive or in the Evergreen list, which contains the most important articles since the beginning of the Web Development Reading List. I hope your days until the end of the year won’t be too stressful, and I wish you all the best. See you next year!

News

  • Microsoft just announced that they’ll change their Edge strategy: They’re going to use Chromium as the new browser engine for Desktop instead of EdgeHTML and might even provide Microsoft Edge for macOS. They’ll also help with development on the Blink engine from now on.
  • Chrome 71 is out and brings relative time support via the Internationalization API (see the quick example after this list). Also new: speech synthesis now requires user activation.
  • Safari Technology Preview 71 is out, bringing supported-color-schemes in CSS and adding Web Authentication as an experimental feature.
  • Firefox will soon offer users a browser setting to block all permission requests automatically. This will affect autoplaying videos, web notifications, geolocation requests, and camera and microphone access requests. The need to auto-block requests shows how badly developers have been misusing these techniques. Sad news for those who rely on such requests for their services, like WebRTC calling services, for example.
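For the Chrome 71 item above, here’s what the new relative time formatting looks like in practice:

```js
// Intl.RelativeTimeFormat, shipping in Chrome 71: format time offsets
// in a given locale without manual string-building.
const rtf = new Intl.RelativeTimeFormat("en", { numeric: "auto" });

console.log(rtf.format(-1, "day"));   // "yesterday"
console.log(rtf.format(3, "week"));   // "in 3 weeks"
console.log(rtf.format(-2, "month")); // "2 months ago"
```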

General

  • We’ve finally come up with ways to access and use websites offline with amazing technology. But one thing we forgot about is that for the past thirty years we taught people that the web is online, so most people don’t know that offline usage even exists. A lesson in user experience design and the importance of reminding us of the history of the medium we’re building for.

UI/UX

  • Matthew Ström wrote about the importance of fixing things later and not trying to be perfect.
  • A somewhat satiric resource about the state of UX in 2019.
  • Erica Hall shows us examples of why most of ‘UX design’ is a myth, and why it’s not only design that makes a great product but also the right product strategy and business model. The best example of why you should read this is when Erica writes, “Virgin America. Rdio. Google Reader. Comcast. Which of these offered a good experience? Which of these still exists?” It’s a truth you can’t ignore, and, luckily, this is not a pessimistic but a very thought-provoking article with great tips for how we can use that knowledge to improve our products: with strategy, with design, with a business model that fits.

Illustration of a woman with a tablet running some design software where her face should be.
After curating and sharing 2,239 links with 264,016 designers around the world, the folks at UX Collective have isolated a few trends in what the UX industry is writing, talking, and thinking about. (Image credit; illustration by Camilla Rosa)

JavaScript

  • Google is about to bring us yet another API: the Badging API allows Web Desktop Apps to indicate new notifications or similar. The spec is still in discussion, and they’d be happy to hear your thoughts about it.
  • Hidde de Vries explains how we can use modern JavaScript APIs to scroll an element into the center of the viewport (see the snippet after this list).
  • Available behind flags in Chrome 71, the new Background Fetch makes it possible to fetch resources that take a while to load — movies, for example — in the background.
  • Pete LePage explains how we can use the Web Share Target API to register a service as Share Target.
  • Is it still a good idea to use JavaScript for loading web fonts? Zach Leatherman shares why we should decide case by case and why it’s often best to use modern CSS and font-display: swap;.
  • Doka is a new standalone JavaScript image editor worth keeping in mind. While it’s not a free product, it features very handy methods for editing with a pleasant user experience, and by paying an annual fee, you ensure that you get bugfixes and support.
  • “The Power of Web Components” shares the basic concepts, how to start using them, and why using your own HTML elements instead of gluing together HTML, the related CSS classes, and a JavaScript trigger can simplify things so much.
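As promised above, here’s the gist of scrolling an element into the center of the viewport with a modern JavaScript API (scrollIntoView with an options object):

```js
// Smoothly scroll #target to the vertical center of the viewport.
const target = document.querySelector("#target");

if (target) {
  target.scrollIntoView({
    behavior: "smooth", // animate instead of jumping
    block: "center",    // vertical alignment within the viewport
    inline: "nearest"   // keep the current horizontal position
    // (Older engines accept only a boolean and treat this object as `true`.)
  });
}
```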

Privacy

  • Do you have a husband or wife? Kids? Other relatives? Then this essential guide to protecting your family’s data is something you should read and turn into action. The internet is no safe place, and you want to ensure your relatives understand what they’re doing — and it’s you who can protect them by teaching them or setting up better default settings.

Web Performance


Comparison of the quality of JPG and WebP images
WebP offers both performance and features. Ire Aderinokun shares why and how to use it. (Image credit)

Work & Life

  • Shana Lynch tells us what makes someone an ethical business leader, which values are important, how to stand upright when things get tough, and how to prepare for uncomfortable situations upfront.
  • Ozoemena Nonso tries to explain why we often aren’t happy. The thief of our happiness is not comparing ourselves with others; it’s that we struggle to get the model of comparison right. An incredibly good piece of life advice if you compare yourself with others often and feel that your happiness suffers from it.
  • A rather uncommon piece of advice: Why forcing others to leave their comfort zone might be a bad idea.
  • Sandor Dargo on how he managed to avoid distractions during work time and do his job properly again.
  • Paul Robert Lloyd writes about Cennydd Bowles’ book “Future Ethics” and while explaining what it is about, he also points out the challenges of ethics with a simple example.
  • Jeffrey Silverstein is a teacher and struggled a lot with finding time for side projects while working full-time. Now he found a solution which he shares with us in this great article about “How to balance full-time work with creative projects.” An inspiring read that I can totally relate to.
  • Ben Werdmüller shares his thoughts on why lifestyle businesses are massively underrated. But what’s a lifestyle business? He defines them as non-venture-funded businesses that allow their owners to maintain a certain level of income but not more. As a fun side note, this article shows how crazy rental prices have become on the U.S. West Coast.
  • Jake Knapp shares how he survived six years with a distraction-free smartphone — no emails, no notifications. And he has some great tips for us, and an exercise to try out. I recently moved all my apps into one folder on the second screen to ensure I have to search for an app, which usually means I really want to open it rather than opening it just to distract myself.
  • Ryan Avent wrote about why we work so hard. This well-researched essay explains why we see work as crucial, why we fall in love with it, and why our lifestyle and society embrace working harder all the time.

An illustration of a hand holding a phone. The phone shows a popup saying: Wait seriously? You wanna delete Gmail? Are you nuts?
Jake Knapp spent six years with a distraction-free phone: no email, no social media, no browser. Now he shares what he learned from it and how you can try your own low-stress experiment. (Image credit)

Smashing Editorial
(cm)

Source: Smashing Magazine, Monthly Web Development Update 12/2018: WebP, The State Of UX, And A Low-Stress Experiment

Collective #476

dreamt up by webguru in Uncategorized | Comments Off on Collective #476



C476_quicklink

quicklink

Quicklink attempts to make navigations to subsequent pages load faster by prefetching in-viewport links while the browser is idle.

Check it out
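Basic usage is roughly a one-liner; the exact import shape has changed between versions, so treat this as a sketch and check the project’s README:

```js
// Sketch of quicklink's v1-era usage: prefetch in-viewport links
// once the page has loaded and the browser is idle.
// (Assumes the package exposes a callable default export, as v1 did.)
import quicklink from "quicklink";

window.addEventListener("load", () => {
  quicklink({
    // Optional: restrict prefetching to same-origin links.
    origins: [location.hostname]
  });
});
```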





C476_game

Under

Under is a great little game written in JavaScript and GLSL with procedural graphics. By Weston C. Beecroft.

Check it out





C476_cssinjs

CSS-in-JS or CSS-and-JS

John Polacek has built something with old-fashioned CSS and JS, and then again with newfangled CSS-in-JS. His message: either approach is fine; do what is right for you.

Check it out





C476_drone

Drone

A fantastic demo using the Three.js Unreal Bloom effect, by Baron Watts.

Check it out


C476_cssprogramming

Programming CSS

Jeremy Keith reminds us how powerful CSS selectors are.

Read it







Collective #476 was written by Pedro Botelho and published on Codrops.


Source: Codrops, Collective #476

How To Convert An Infographic Into A Gifographic Using Adobe Photoshop

dreamt up by webguru in Uncategorized | Comments Off on How To Convert An Infographic Into A Gifographic Using Adobe Photoshop

How To Convert An Infographic Into A Gifographic Using Adobe Photoshop

How To Convert An Infographic Into A Gifographic Using Adobe Photoshop

Manish Dudharejia



Visuals have played a critical role in the marketing and advertising industry since their inception. For years, marketers have relied on images, videos, and infographics to better sell products and services. The importance of visual media has increased further with the rise of the Internet and consequently, of social media.

Lately, gifographics (animated infographics) have also joined the list of popular visual media formats. If you are a marketer, a designer, or even a consumer, you must have come across them. What you may not know, however, is how to make gifographics, and why you should try to add them to your marketing mix. This practical tutorial should give you answers to both questions.

In this tutorial, we’ll be taking a closer look at how a static infographic can be animated using Adobe Photoshop, so some Photoshop knowledge (at least the basics) is required.

What Is A Gifographic?

Some History

The word gifographic is a combination of two words: GIF and infographic. The term gifographic was popularized by marketing experts (and among them, by Neil Patel) around 2014. Let’s dive a little bit into history.

CompuServe introduced the GIF (Graphics Interchange Format) on June 15, 1987, and the format became a hit almost instantly. Initially, the use of the format remained somewhat restricted owing to patent disputes in the early years (related to the compression algorithm used in GIF files, LZW), but later, when most GIF patents expired, and owing to their wide support and portability, GIFs gained a lot in popularity — which even led the word “GIF” to become “Word of the Year” in 2012. Even today, GIFs are still very popular on the web and on social media (*).

The GIF is a bitmap image format. It supports up to 8 bits per pixel, so a single GIF can use a limited palette of up to 256 different colors (including, optionally, one transparent color). Lempel–Ziv–Welch (LZW) is the lossless data compression technique used to compress GIF images, which in turn reduces the file size without affecting their visual quality. What’s more interesting, though, is that the format also supports animations and allows a separate palette of up to 256 colors for each animation frame.

Tracing back in history as to when the first infographic was created is much more difficult, but the definition is easy — the word “infographic” comes from “information” and “graphics,” and, as the name implies, an infographic serves the main purpose of presenting information (data, knowledge, etc.) quickly and clearly, in a graphical way.

In his 1983 book The Visual Display of Quantitative Information, Edward Tufte gives a very detailed definition for “graphical displays” which many consider today to be one of the first definitions of what infographics are, and what they do: to condense large amounts of information into a form where it will be more easily absorbed by the reader.

A Note On GIFs Posted On The Web (*)

Animated GIF images posted to Twitter, Imgur, and other services most often end up as H.264 encoded video files (HTML5 video), and are technically not GIFs anymore when viewed online. The reason behind this is pretty obvious — animated GIFs are perhaps the worst possible format to store video, even for very short clips, as unlike actual video files, GIF cannot use any of the modern video compression techniques. (Also, you can check this article: “Improve Animated GIF Performance With HTML5 Video” which explains how with HTML5 video you can reduce the size of GIF content by up to 98% while still retaining the unique qualities of the GIF format.)
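As a rough sketch of that technique (the data-mp4 attribute is a convention invented for this example; the point is simply to serve the animation as HTML5 video while keeping GIF-like behavior):

```js
// Replace animated GIFs with equivalent MP4s for much smaller downloads,
// while keeping the autoplaying, looping, silent behavior of a GIF.
document.querySelectorAll("img[data-mp4]").forEach((img) => {
  const video = document.createElement("video");
  video.src = img.dataset.mp4; // hypothetical attribute pointing to the MP4
  video.autoplay = true;
  video.loop = true;
  video.muted = true;       // required for autoplay in most browsers
  video.playsInline = true; // avoid the fullscreen takeover on iOS
  video.width = img.width;
  video.height = img.height;
  img.replaceWith(video);
});
```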

On the other hand, it’s worth noting that gifographics most often remain in their original format (as animated GIF files), and are not encoded to video. While this leads to not-so-optimal file sizes (as an example, a single animated GIF in this “How engines work?” popular infographic page is between ~ 500 KB and 5 MB in size), on the plus side, the gifographics remain very easy to share and embed, which is their primary purpose.

Why Use Animated Infographics In Your Digital Marketing Mix?

Infographics are visually compelling media. A well-designed infographic not only can help you present a complex subject in a simple and enticing way, but it can also be a very effective means of increasing your brand awareness as part of your digital marketing campaign.

Remember the popular saying, “A picture is worth a thousand words”? There is a lot of evidence that animated pictures can be even more effective, which is why motion infographics have recently grown in popularity.

From Boring To Beautiful

Gifographics can breathe life into sheets of boring facts and mundane numbers with the help of animated charts and graphics. Motion infographics are also the right means to illustrate complex processes or systems with moving parts, making them more palatable and meaningful. Thus, you can easily turn boring topics into visually engaging treats. For example, we created the gifographic “The Most Important Google Search Algorithm Updates Of 2015”, elaborating on the changes Google made to its search algorithm in 2015.

Cost-Effective

Gifographics are perhaps the most cost-effective alternative to video content. You don’t need expensive cameras, video editing, sound mixing software, and a shooting crew to create animated infographics. All it takes is a designer who knows how to make animations by using Photoshop or similar graphic design tools.

Works For Just About Anything

You can use a gifographic to illustrate just about anything in bite-sized sequential chunks. From product explainer videos to numbers and stats, you can share anything through a GIF infographic. Animated infographics can also be interactive. For example, you can adjust a variable to see how it affects the data in an animated chart.

Note: An excellent example of an interactive infographic is “Building An Interactive Infographic With Vue.js” written by Krutie Patel. It was built with the help of Vue.js, SVG and GSAP (GreenSock Animation Platform).

SEO Boost

As a marketer, you are probably aware that infographics can provide a substantial boost to your SEO. People love visual media, so they are more likely to share a gifographic they enjoyed. The more your animated infographics are shared, the bigger the boost to your site traffic. Thus, gifographics can indirectly improve your SEO and, therefore, your search engine rankings.

How To Create A Gifographic From An Infographic In Photoshop

Now that you know the importance of motion in infographics, let’s get practical and see how you can create your first gifographic in Photoshop. And if you already know how to make infographics in Photoshop, it will be even easier for you to convert your existing static infographic into an animated one.

Step 1: Select (Or Prepare) An Infographic

The first thing you need to do is choose the static infographic that you would like to transform into a gifographic. For learning purposes, you can animate any infographic, but I recommend picking an image with elements that are suitable for animation. Explainers, tutorials, and process overviews are easy to convert into motion infographics.

If you are going to start from scratch, make sure you have first finished the static infographic to the last detail before proceeding to the animation stage as this will save you a lot of time and resources — if the original infographic keeps changing you will also need to rework your gifographic.

Once you have finalized the infographic, the next step is to decide which parts you are going to animate.


Finalize Your Infographic
Infographic finalization (Large preview)

Step 2: Decide What The Animation Story Will Be

You can include some — or all — parts of the infographic in the animation. However, as there are different ways to create animations, you must first decide on the elements you intend to animate, and how. In my opinion, sketching (outlining) various animation case scenarios on paper is the best way to pick your storyline. It will save you a lot of time and confusion down the road.

Start by deciding which “frames” you would like to include in the animation. At this stage, frames will be nothing else but rough sketches made on sheets of paper. The higher the number of frames, the better the quality of your gifographic will be.

You may need to divide the animated infographic into different sections. If so, be sure to choose an equal count of frames for all parts; otherwise, the gifographic will look uneven, with each section moving at a different speed.


Pick Your Animation Storyline
Deciding and picking your animation story (Large preview)

Step 3: Create The Frames In Photoshop

Open Adobe Photoshop to create the different frames for each section of the gifographic. You will need to cut, rotate, and move the images painstakingly, and you will need to keep track of the exact change you made to the previous frame. You can use the Photoshop ruler for this.

You will need to build your animation from layers in Photoshop. But in this case, you will be copying all of the Photoshop layers together and editing each layer individually.

You can check the frames one by one by hiding/showing different layers. Once you have finished creating all the frames, check them for possible errors.

Create Frames in Photoshop. (Large preview)

You can also create a short frame animation using just the first and the last frame. Select both frames by holding the Ctrl/Cmd key (Windows/Mac), then click on “Tween.” Select the number of frames you want to add in between. Choose “First Frame” if you want to add the new frames between the first and the last frames; the “Previous Frame” option will add frames between your current selection and the one before it. Check the “All Layers” option to include all the layers from your selections.


How to create Short Frame Animation
Short frame animation (Large preview)

Step 4: Save PNG (Or JPG) Files Into A New Folder

The next step is to export each animation frame individually in PNG or JPG format. (Note: JPG is a lossy format, so PNG is usually the better choice.)

You should save these PNG files in a separate folder for the sake of convenience. I always number the saved images as per their sequence in the animation. It’s easy for me to remember that “Image-1” will be the first image in the sequence followed by “Image-2,” “Image-3,” and so on. Of course, you can save them in a way suitable for you.


How to Save JPG Files in a New Folder
Saving JPG files in a new folder (Large preview)
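If you prefer to automate this step, a small Photoshop ExtendScript along the following lines can export each layer as a numbered PNG (run it via File > Scripts > Browse…). This is only a sketch, assuming that each animation frame lives on its own layer; adapt it to your document’s structure:

// Export every layer of the active document as a numbered PNG.
// Assumption: one layer per animation frame.
var doc = app.activeDocument;
var outFolder = Folder.selectDialog("Choose an output folder");
if (outFolder) {
    var pngOptions = new PNGSaveOptions();
    for (var i = 0; i < doc.artLayers.length; i++) {
        // Make only the current layer visible.
        for (var j = 0; j < doc.artLayers.length; j++) {
            doc.artLayers[j].visible = (j === i);
        }
        // Save a copy named Image-1.png, Image-2.png, and so on.
        doc.saveAs(new File(outFolder + "/Image-" + (i + 1) + ".png"), pngOptions, true, Extension.LOWERCASE);
    }
}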

Step 5: “Load Files Into Stack”

Next comes loading the saved PNG files into Photoshop.

Go to the Photoshop window and open File > Scripts > Load files into Stack…

A new dialog box will open. Click on the “Browse” button and open the folder where you saved the PNG files. You can select all files at once and click “OK.”

Note: You can check the “Attempt to Automatically Align Source Images” option to avoid alignment issues. However, if your source images are all the same size, this step is not needed. Furthermore, automatic alignment can cause issues in some cases, as Photoshop will move the layers around in an attempt to align them. So, use this option based on your specific situation — there is no one-size-fits-all recipe.

It may take a while to load the files, depending on their size and number. While Photoshop is busy loading these files, maybe you can grab a cup of coffee!

Load Files into Stack. (Large preview)

Step 6: Set The Frames

Once the loading is complete, go to Window > Layers (or you can press F7) and you will see all the layers in the Layers panel. The number of Layers should match the number of frames loaded into Photoshop.

Once you have verified this, go to Window > Timeline. You will see the Timeline panel at the bottom (the panel’s default position). Choose the “Create Frame Animation” option from the panel. Your first PNG file will appear on the Timeline.

Now, select “Make Frames from Layers” from the right-side menu (Palette Option) of the Animation Panel.

Note: Sometimes the PNG files get loaded in reverse, making your “Image-1” appear at the end and vice versa. If this happens, select “Reverse Layers” from Animation Panel Menu (Palette Option) to get the desired image sequence.

Set the Frames. (Large preview)

Step 7: Set The Animation Speed

The default display time for each image is 0.00 seconds. Adjusting this time determines the speed of your animation (or GIF file). If you select all the images, you can set the same display time for all of them at once. Alternatively, you can set a different display time for each image or frame.

I recommend the former option, though, as a uniform display time is easier to manage. Also, setting a different display time for each frame may lead to a not-so-smooth animation.

If none of the available choices suits you, click the “Other” option to set a custom display time.

You can also make the animation play in reverse: copy the frames in the Timeline palette and choose the “Reverse Layers” option. You can drag frames while holding the Ctrl key (on Windows) or the Cmd key (on Mac).

You can set the number of times the animation should loop. The default option is “Once.” However, you can set a custom loop value using the “Other” option. Use the “Forever” option to keep your animation going in a non-stop loop.

To preview your GIF animation, press the Enter key or the “Play” button at the bottom of the Timeline Panel.

Set the Animation Speed. (Large preview)

Step 8: Ready To Save/Export

If everything goes according to plan, the only thing left is to save (export) your GIF infographic.

To Export the animation as a GIF: Go to File > Export > Save for Web (Legacy)

  1. Select “GIF 128 Dithered” from the “Preset” menu.
  2. Select “256” from the “Colors” menu.
  3. If you will be using the GIF online or want to limit the file size of the animation, change Width and Height fields in the “Image Size” options accordingly.
  4. Select “Forever” from the “Looping Options” menu.

Click the “Preview” button in the lower left corner of the Export window to preview your GIF in a web browser. If you are happy with it, click “Save” and select a destination for your animated GIF file.

Note: There are lots of options that control the quality and file size of GIFs — number of colors, amount of dithering, etc. Feel free to experiment until you achieve the optimal GIF size and animation quality.

Your animated infographic is ready!


How to Save Your GIF infographic
Saving your GIF infographic (Large preview)

Step 9 (Optional): Optimization

Gifsicle (a free command-line program for creating, editing, and optimizing animated GIFs) and other similar GIF post-processing tools can help reduce the exported GIF file size beyond Photoshop’s abilities.
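For example, a typical invocation that applies Gifsicle’s highest optimization level and reduces the color palette might look like this (the file names are placeholders):

gifsicle -O3 --colors 128 gifographic.gif -o gifographic-optimized.gif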

ImageOptim is also worth mentioning — dragging files to ImageOptim will directly run Gifsicle on them. (Note: ImageOptim is Mac-only but there are quite a few alternative apps available as well.)

Troubleshooting Tips

You are likely to run into trouble at two crucial stages.

Adding New Layers

Open the “Timeline Toolbar” drop-down menu and select the “New Layers Visible in all Frames” option. It will help tune your animation without any hiccups.


How to Add New Layers Visible in all Frames
Adding new layers (Large preview)

Layers Positioning

Sometimes, you may end up putting layers in the wrong frames. To fix this, you can select the same layer in a fresh frame and select “Match Layer Across Frames” option.


How to Match Layers Across Frames
Positioning layers (Large preview)

Gifographic Examples

Before wrapping this up, I would like to share a few good examples of gifographics. Hopefully, they will inspire you just as they did me.

  1. Google’s Biggest Search Algorithm Updates Of 2016
    This one is my personal favorite. Incorporating Google algorithm updates in a gifographic is difficult owing to its complexity. But, with the use of the right animations and some to-the-point text, you can turn a seemingly complicated subject into an engaging piece of content.
  2. Virtual Reality: A Fresh Perspective For Marketers
    This one turns a seemingly descriptive topic into a smashing gifographic. The gifographic breaks up the Virtual Reality topic into easy-to-understand numbers, graphs, and short paragraphs with perfect use of animation.
  3. How Google Works
    I enjoy reading blog posts by Neil Patel. Just like his posts, this gifographic is also comprehensive. The only difference is that Neil conveys the essential message through accurately placed GIFs instead of short paragraphs. He uses only the colors of Google’s logo.
  4. The Author Rank Building Machine
    This one lists different tips to help you become an authoritative writer. The animation is simple with a motion backdrop of content creation factory. Everything else is broken down into static graphs, images, and short text paragraphs. But, the simple design works, resulting in a lucid gifographic.
  5. How Car Engines Work
    Beautifully illustrated examples of how car engines work (petrol internal combustion engines and hybrid gas/electric engines). By the way, it’s worth noting that Wikipedia also uses animated GIFs for very similar purposes in some of its articles.

Wrapping Things Up

As you can see, turning your static infographic into an animated one is not very complicated. Armed with Adobe Photoshop and some creative ideas, you can create engaging and entertaining animations, even from scratch.

Of course, your gifographic can have multiple animated parts and you’ll need to work on them individually, which, in turn, will require more planning ahead and more time. (Again, a good example of a rather complex gifographic would be the one shown in “How Car Engines Work?” where different parts of the engine are explained in a series of connected animated images.) But if you plan well, sketch, create, and test, you will succeed and you will be able to make your own cool gifographics.

If you have any questions, ask me in the comments and I’ll be happy to help.


Source: Smashing Magazine, How To Convert An Infographic Into A Gifographic Using Adobe Photoshop

The Importance Of Macro And Micro-Moment Design

dreamt up by webguru in Uncategorized | Comments Off on The Importance Of Macro And Micro-Moment Design

The Importance Of Macro And Micro-Moment Design

The Importance Of Macro And Micro-Moment Design

Susan Weinschenk



(This article is kindly sponsored by Adobe.) When you design the information architecture, the navigation bars of an application, or the overall layout and visual design of a product, you are focusing on macro design. When you design one part of a page, one form, or one single task and interaction, you are focusing on micro-moment design.

In my experience, designers often spend a lot of time on macro design issues, and sometimes less so on critical micro-moment design issues. That might be a mistake.

Here’s an example of how critical micro-moment design can be.

I read a lot of books. We are talking over a hundred books a year. I don’t even know for sure how many books I read, and because I read so many books, I am a committed library patron. Mainly for reading fiction for fun (and even sometimes for reading non-fiction), I rely on my library to keep my Kindle full of interesting things to read.

Luckily for me, the library system in my county and in my state is pretty good in terms of having books available for my Kindle. Unluckily, this statewide library website and app need serious UX improvements.

I was thrilled when my library announced that instead of using a (poorly designed) website (that did not have a mobile responsive design), the library was rolling out a brand new mobile app, designed specifically to optimize the experience on a mobile phone. “Yay!” I thought. “This will be great!”

Perhaps I spoke too soon.

Let me walk you through the experience of signing into the app. First, I downloaded the app and then went to log in:


A screenshot of signing into Wisconsin’s digital library
(Large preview)

I didn’t have my library card with me (I was traveling), and I wasn’t sure what “Sign in with OverDrive” was about, but I figured I could select my library from the list, so I pressed on the down arrow.


Pressing on the down arrow to find out more details on how to log into Wisconsin’s digital library
(Large preview)

“Great,” I thought. Now I can just scroll to get to my library. I know that my library is in Marathon County here in Wisconsin. In fact, I know from using the website that they call my library “Marathon County, Edgar Branch” or something similar, since I live in a village called Edgar. So I figured that would be what I should look for, especially since I could see that the list went from B (Brown County) to F (Fond du Lac Public Library) with no E for Edgar showing. So I proceeded to scroll.

I scrolled for a while, looking for M (in hope of finding Marathon).


Searching for the desired library name in alphabetical order in Wisconsin’s digital library
(Large preview)

Hmmm. I see Lone Rock, and then the next one on the list is McCoy. I know that I am in Marathon County, and that in fact, there are several Marathon County libraries. Yet, we seem to have skipped Marathon in the list.

I keep scrolling.


Scrolling in the list of library names on the site
(Large preview)

Uh oh. We got to the end of the list (to the W’s), but now we seem to be starting with A again. Well, then, perhaps Marathon will now appear if I keep scrolling.

Do you know how many Wisconsin libraries are on this list? I know, because as I started to document this user experience, I decided to count the number of entries on the list (only a crazy UX professional would take the time to do this, I think).

There are 458 libraries on this list, and the list kept getting to the end of the alphabet and then for some reason starting over. I never did figure out why.

Finally, though, I did get to Marathon!


Scrolling further down the list just to find a number of libraries named “Marathon County Public Library”
(Large preview)

And then I discovered I was really in trouble since several libraries start with “Marathon County Public Library”. Since the app only shows the first 27 or so characters, I don’t know which one is mine.

You know what I did at this point?

I decided to give up. And right after I decided that, I got this screen (as “icing on the cake” so to speak):


Error message on Wisconsin’s digital library
(Large preview)

Did you catch the “ID” that I’m supposed to reference if I contact support? Seriously?

This is a classic case of micro-moment design problems.

I can guess that by now some of you are thinking, “Well, that wouldn’t happen to (me, my team, an experienced UX person).” And you might be right. Especially this particular type of micro-moment design fail.

However, I can tell you that I see micro-moment design failures in all kinds of apps, software, digital products, websites, and from all kinds of companies and teams. I’ve seen micro-moment design failures from organizations with and without experienced UX teams, tech-savvy organizations, customer-centric organizations, large established companies and teams, and new start-ups.

Let’s pause for a moment and contrast micro-moment design with macro design.

Let’s say that you are hired to evaluate the user experience of a product. You gather data about the app, the users, the context, and then you start walking through the app. You notice a lot of issues that you want to raise with the team — some large, some small:

  • There are some inconsistencies from page-to-page/screen-to-screen in the app. You would like to see whether they have laid out pages on a grid and if that can be improved;
  • You have questions about whether the color scheme meets branding guidelines;
  • You suspect there are some information architecture issues. The organization of items in menus and the use of icons seems not quite intuitive;
  • One of the forms that users are supposed to fill out and submit is confusing, and you think people may not be able to complete the form and submit the information because it isn’t clear what the user is supposed to enter.

There are many ways to categorize user experience design factors, issues, and/or problems. Ask any UX professional and you will probably get a similar, but slightly different list. For example, UX people might think about the conceptual model, visual design, information architecture, navigation, content, typography, context of use, and more. Sometimes, though, it might be useful to think about UX factors, issues, and design in terms of just two main categories: macro design and micro-moment design.

In the example above, most of the factors on the list were macro design issues: inconsistencies in layout, color schemes, and information architecture. Some people talk about macro design issues as “high-level design” or “conceptual model design”. These are UX design elements that cross different screens and pages. These are UX design elements that give hints and cues about what the user can do with the app, and where to go next.

Macro design is critical if you want to design a product that people want to use. If the product doesn’t match the user’s mental model, if the product is not “intuitive” — these are often (not always, but often) macro design issues.

Which means, of course, that macro design is very important.

It’s not just micro-moment design problems that cause trouble. Macro design issues can result in massive UX problems, too. But macro design issues are more easily spotted by an experienced UX professional because they can be more obvious, and macro design usually gets time devoted to it relatively early in the design process.

If you want to make sure you don’t have macro design problems then do the following:

  • Do the UX research upfront that you need to do in order to have a good idea of the users’ mental models. What does the user expect to do with this product? What do they expect things to be called? Where do they expect to find information?
  • For each task that the user is going to do, make sure you have picked one or two “objects” and made them obvious. For instance, when the user opens an app for looking for apartments to rent the objects should be apartments, and the views of the objects should be what they expect: List, detail, photo, and map. If the user opens an app for paying an insurance bill, then the objects should be policy, bill, clinic visit, while the views should be a list, detail, history, and so on.
  • The reason you do all the UX-research-related things UXers do (such as personas, scenarios, task analyses, and so on) is so that you can design an effective, intuitive macro design experience.

It’s been my experience, however, that teams can get caught up in designing, evaluating, or fixing macro design problems, and not spend enough time on micro-moment design.

In the example earlier, the last issue is a micro-moment design issue:

  • One of the forms that users are supposed to fill out and submit is confusing, and you think people may not be able to complete the form and submit the information because it isn’t clear what the user is supposed to enter.

And the library example at the start of the article is also an example of micro-moment design gone awry.

Micro-moment design refers to problems with one very specific page/form/task that someone is trying to accomplish. It’s that “make-or-break” moment that decides not just whether someone wants to use the app, but whether they can even use the app at all, or whether they give up and abandon, or end up committing errors that are difficult to correct. Not being able to choose my library is a micro-moment design flaw. It means I can’t continue. I can’t use the app anymore. It’s a make-or-break moment for the app.

When we are designing a new product, we often focus on the macro design. We focus on the overall layout, information architecture, conceptual model, navigation model, and so on. That’s because we haven’t yet designed any micro-moments.

The danger is that we will forget to pay close attention to micro-moment design.

So, going back to our library example, and your possible disbelief that such a micro-moment design fail could happen on your watch. It can. Micro-moment design failures can happen for many reasons.

Here are a few common ones I’ve seen:

  • A technical change (for example, how many characters can be displayed in a field) is made after a prototype has been reviewed and tested. So the prototype worked well and did not have a UX problem, but the technical change occurred later, thereby causing a UX problem without anyone noticing.
  • Patterns and standards that worked well in one form or app are re-used in a different context/form/app, and something about the particular field or form in the new context means there is a UX issue.
  • Features are added later by a different person or team who does not realize the impact that a particular feature, field, or form has on another micro-moment earlier or later in the process.
  • User testing is not done, or it’s done on only a small part of the app, or it’s done early and not re-done later when changes are made.

If you want to make sure you don’t have micro-moment design problems then do the following:

  • Decide what are the critical make-or-break moments in the interface.
  • At each of these moments, decide what exactly it is that the user wants to do.
  • At each of these moments, decide what exactly it is that the product owner wants users to do.
  • Figure out exactly what you can do with design to make sure both of the above can be satisfied.
  • Make that something the highest priority of the interface.

Takeaways

Both macro and micro-moment design are critical to the user experience success of a product. Make sure you have a process for designing both, and that you are giving equal time and resources to both.

Identify the critical make-or-break micro-moments as they get designed, and do user testing on them as soon as you can. Re-test when changes are made.

Try talking about micro-moment design and macro design with your team. You may find that this categorization of design issues makes sense to them, perhaps more than whichever categorization scheme you’ve been using.

This article is part of the UX design series sponsored by Adobe. Adobe XD is made for a fast and fluid UX design process, as it lets you go from idea to prototype faster. Design, prototype and share — all in one app. You can check out more inspiring projects created with Adobe XD on Behance, and also sign up for the Adobe experience design newsletter to stay updated and informed on the latest trends and insights for UX/UI design.


Source: Smashing Magazine, The Importance Of Macro And Micro-Moment Design

Protecting Your Site With Feature Policy

dreamt up by webguru in Uncategorized | Comments Off on Protecting Your Site With Feature Policy

Protecting Your Site With Feature Policy

Protecting Your Site With Feature Policy

Rachel Andrew



One of the web platform features highlighted at the recent Chrome Dev Summit was Feature Policy, which aims to “allow site authors to selectively enable and disable use of various browser features and APIs.” In this article, I’ll take a look at what that means for web developers, with some practical examples.

In his introductory article on the Google Developers site, Eric Bidelman describes Feature Policy as the following:

“The feature policies themselves are little opt-in agreements between developer and browser that can help foster our goals of building (and maintaining) high-quality web apps.”

The specification has been developed at Google as part of the Web Platform Incubator Community Group activity. The aim of Feature Policy is for us, as web developers, to be able to state our usage of a web platform feature explicitly to the browser. By doing so, we make an agreement about our use, or non-use, of this particular feature. Based on this, the browser can act to block certain features, or report back to us that a feature it did not expect to see is being used.

Examples might include:

  1. I am embedding an iframe and I do not want the embedded site to be able to access the camera of my visitor;
  2. I want to catch situations where unoptimized images are deployed to my site via the CMS;
  3. There are many developers working on my project, and I would like to know if they use outdated APIs such as document.write.

All of these things can be tracked, blocked or reported on as part of Feature Policy.

How To Use Feature Policy

In order to use Feature Policy, the browser needs to know two things: which feature you are creating a policy for, and how you want that feature to be handled.

Feature-Policy: <directive> <allowlist>

The <directive> is the name of the feature that you are setting the policy on.

The current list of features (sourced from the presentation given at Chrome Dev Summit) is as follows:

  • accelerometer
  • ambient-light-sensor
  • autoplay
  • camera
  • document-write
  • encrypted-media
  • fullscreen
  • geolocation
  • gyroscope
  • layout-animations
  • lazyload
  • legacy-image-formats
  • magnetometer
  • midi
  • oversized-images
  • payment
  • picture-in-picture
  • speaker
  • sync-script
  • sync-xhr
  • unoptimized-images
  • unsized-media
  • usb
  • vertical-scroll
  • vr

The <allowlist> details how the feature can be used — if at all — and takes one or more of the following values.

  • *
    The most liberal policy, stating that the feature will be allowed in this document, and any iframes whether from this domain or elsewhere. May only be used as a single value as it makes no sense to enable everything and also pass in a list of domains, for example.
  • self
    The feature will be available in the document and any iframes; however, the iframes must have the same origin.
  • src
    Only applicable when using an iframe allow attribute. This allows a feature as long as the document loaded into it comes from the same origin as the URL in the iframe’s src attribute.
  • none
    Disables the feature for the document and any nested iframes. May only be used as a single value.
  • <origin(s)>
    The feature is allowed for specific origins; this means that you can specify a list of domains where the feature is allowed. The list of domains is space separated.
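Putting the directive and the allowlist together, a policy that allows geolocation for the document itself plus a single trusted origin could look like the following (the origin is an illustrative placeholder):

Feature-Policy: geolocation 'self' https://trusted.example.com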

There are two methods by which you can enable feature policies on your site: You can send an HTTP Header, or use the allow attribute on an iframe.

HTTP Header

Sending an HTTP header means that you can enable a feature policy for the page (or the entire site) that sets the header, along with anything embedded in the site. Headers can be set for your entire site at the web server or can be sent from your application.

For example, if I wanted to prevent the use of the geolocation API and I was using the NGINX web server, I could edit the configuration files for my site in NGINX to add the following header, which would prevent any document in my site, and any iframe embedded in it, from using the geolocation API.

add_header Feature-Policy "geolocation 'none';";

Multiple policies can be set in a single header. To prevent geolocation and vibrate but allow unsized-media from the domain example.com I could set the following:

add_header Feature-Policy "vibrate 'none'; geolocation 'none'; unsized-media http://example.com;";

The allow Attribute On iFrames

If we are primarily concerned with what happens with the content in an iframe, we can use Feature Policy on the iframe itself; this benefits from slightly better browser support at the time of writing with Chrome and Safari supporting this use.

If I am embedding a site and do not want that site to use geolocation, camera or microphone APIs then my iframe would look like the following example:

<iframe allow="geolocation 'none'; camera 'none'; microphone 'none'">

You may already be familiar with the individual attributes which control the content of iframes: allowfullscreen, allowpaymentrequest, and allowusermedia. These can be replaced by the Feature Policy allow attribute, and for browser compatibility reasons you can use both on an iframe. If you do use both attributes, then the most restrictive one will apply. The Google article shows an example of an iframe that uses allowfullscreen (meaning that the iframe is allowed to enter fullscreen) but also a conflicting Feature Policy of fullscreen 'none'. The most restrictive policy wins, so this iframe would not be allowed to enter fullscreen.

<iframe allowfullscreen allow="fullscreen 'none'" src="...">

The iframe element also has a sandbox attribute designed to manage support for many features. This feature was also added to Content Security Policy with a sandbox value which disables all sandbox features, which can then be opted back into selectively. There is some crossover between sandbox features and those controlled by Feature Policy, and Feature Policy does not seek to duplicate those values already covered by sandbox. It does, however, address some of the limitations of sandbox by taking a more fine-grained approach to managing these policies, rather than one of turning everything off globally as one large policy set.

Feature Policy And Reporting API

Feature Policy violations can be reported via the Reporting API, which means that you could develop a comprehensive set of policies tracking feature usage across your site. This would be completely transparent to your users but give you a huge amount of information about how features were being used.
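The exact mechanics were still settling at the time of writing, but a report-gathering setup would look roughly like this: declare a reporting endpoint with the Report-To header, and the browser can then POST any feature policy violation reports to it. A sketch only, with the endpoint URL as an assumption:

Report-To: {"group": "default", "max_age": 10886400, "endpoints": [{"url": "https://example.com/reports"}]}
Feature-Policy: geolocation 'none'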

Browser Support For Feature Policy

Currently, browser support for Feature Policy is limited to Chrome; however, in many cases where you are using Feature Policy during development and when previewing sites, this is not necessarily a problem.

Many of the use cases I will outline below are usable right now, without causing any impact to site visitors who are using browsers without support.

When To Use Feature Policy

I really like the idea of being able to use Feature Policy to help back up decisions made when developing the site. Decisions which may well be written up in documents such as a performance budget, or as part of a GDPR audit, but which then become something we have to remember to preserve through the life of the site. This is not always easy when multiple people work on a site; people who perhaps weren’t involved during that initial decision making, or may simply be unaware of the requirements. We think a lot about third parties managing to somehow impact our site, however, sometimes our sites need protecting from ourselves!

Keeping An Eye On Third Parties

You could prevent a third-party site from accessing the camera or microphone using a feature policy on the iframe with the allow attribute. If the reason for embedding that site has nothing to do with those features, then disabling them means that the site can never start to ask for those. This could then be linked with your processes for ensuring GDPR compliance. As you audit the privacy impact of your site, you can build in processes for locking down the access of third parties by way of feature policy — giving you and your visitors additional security and peace of mind.

This usage does rely on browser support for Feature Policy to block the usage. However, you could use Feature Policy reporting mode to inform you of usage of these APIs if the third party changed what they would be doing. This would give you a very quick heads-up — essentially as soon as the first person using Chrome hits the site.

Selectively Enabling Features

We also might want to selectively enable some features which are normally blocked. Perhaps we wish to allow an iframe loading content from another site to use the geolocation feature in the browser. Chrome by default blocks this, but if you are loading content from a trusted site you could enable the cross-origin request using Feature Policy. This means that you can safely turn on features when loading content from another domain that is under your control.

Catching Use Of Outdated APIs And Poorly Performing Features

Feature Policy can be run in a report-only mode. It can then track usage of certain features and let you know when they are found on the site. This can be useful in many scenarios. If you have a very large site with a lot of legacy code, enabling Feature Policy would help you to track down the places that need attention. If you work with a large team (especially if developers often pull in some third party libraries of code), Feature Policy can catch things that you would rather not see on the site.

Dealing With Poorly Optimized Images

While most of the articles I’ve seen about Feature Policy concentrate on the security and privacy aspects, the features around image optimization really appealed to me, as someone who deals with a lot of content generated by technical and non-technical users. Feature Policy can be used to help protect the user experience as well as the performance of your site by preventing overly large — or unoptimized images — being downloaded by visitors.

In an ideal world, your CMS would deal with image management, ensuring that images were sensibly resized, optimized for the web and the context they will be displayed in. Real life is rarely that ideal world, however, and so sometimes the job of resizing and optimizing images is left to content editors to ensure they are not uploading huge images to the web. This is particularly an issue if you are using a static CMS with no content management layer on top of it. Even as a technical person, it is very easy to forget to resize that giant screenshot or camera image you popped into a folder as a placeholder.

Currently behind a flag in Chrome are features which can help. The idea behind these features is to highlight the problematic images so that they can be fixed — without completely breaking the site.

The unsized-media feature policy looks for images or video which do not have a size set in the HTML or CSS. When an unsized media element loads, it can cause the content on the page to reflow.

In order to prevent any unsized media being added to the site, set the following header. Media will then be displayed with a default size of 300×150 pixels. You will see your site loading with small media, and realize you have a problem to fix.

Feature-Policy: unsized-media 'none'

See a demo (needs Chrome Canary with Experimental Web Platform Features on).
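To keep media out of the unsized bucket, give every image and video explicit dimensions in the HTML or CSS. For example (attribute values are placeholders):

<img src="team-photo.jpg" width="800" height="600" alt="The team at the office">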

The oversized-images feature policy checks that images are not much larger than their containers. If they are, a placeholder will be shown instead. This policy is incredibly useful for checking that you are not sending huge desktop images to your mobile users.

Feature-Policy: oversized-images 'none'

See a demo (needs Chrome Canary with Experimental Web Platform Features on).

The unoptimized-images feature policy checks that the data size of each image in bytes is no more than 0.5x bigger than its rendering area in pixels. If this policy is enabled and images violate it, a placeholder will be shown instead of the image.

Feature-Policy: unoptimized-images 'none'

See a demo (needs Chrome Canary with Experimental Web Platform Features on).

Testing And Reporting On Feature Policy

Chrome DevTools will display a message to inform you that certain features have been blocked or enabled by a Feature Policy. If you have enabled Feature Policy on your site, you can check that this is working.
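In supporting versions of Chrome you can also inspect the active policy from the DevTools console via the document.featurePolicy object. A quick sketch, assuming the API is available in your build:

// List every feature currently allowed in this document.
console.log(document.featurePolicy.allowedFeatures());

// Check whether a single feature is allowed.
console.log(document.featurePolicy.allowsFeature('geolocation'));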

Support for Feature Policy has also been added to the Security Headers site, which means you can check for these along with headers such as Content Security Policy on your site — or other sites on the web.

There is a Chrome DevTools Extension which lets you toggle on and off different Feature Policies (also a great way to check your pages without needing to configure any headers).

If you would like to integrate your Feature Policies with the Reporting API, there is further information available on how to do this.

Further Reading And Resources

I have found a number of resources, many of which I used when researching this article. These should give you all that you need to begin implementing Feature Policy in your own applications. If you are already using Content Security Policy, this seems an additional logical step towards controlling the way your site works with the browser to help ensure the security and privacy of people using your site. You have the added bonus of being able to use Feature Policy to help you keep on top of performance-damaging elements being added to your site over time.


Source: Smashing Magazine, Protecting Your Site With Feature Policy

Introducing Float.com: A Better Alternative To Spreadsheets

dreamt up by webguru in Uncategorized | Comments Off on Introducing Float.com: A Better Alternative To Spreadsheets

Introducing Float.com: A Better Alternative To Spreadsheets

Introducing Float.com: A Better Alternative To Spreadsheets

Nick Babich



(This is a sponsored post.) In today’s highly competitive market, it’s vital to move fast. After all, we all know how the famous saying goes: “Time is money.” The faster your product team moves when creating a product, the higher the chance it’ll succeed in the market. If you are a project manager, you need a tool that helps you get the most out of each team member’s time.

Though creative teams typically work on new and innovative products, many still use legacy tools to manage their work. Spreadsheets are one of the most common tools in the project manager’s toolbox. While this might be adequate for a team of two or three members, as a team grows, managing the team’s time becomes a demanding job.

Whenever project managers try to manage their team using spreadsheets alone, they usually face the following problems:

1. Lack Of Glanceability

Understanding what’s really happening on a project takes a lot of work. It’s hard to visually grasp who’s busy and who’s not. As a result, some team members might end up overloaded, while others will have too little to do. You also won’t get a clear breakdown of how much time is being devoted to particular work and particular clients, which is crucial not just for billing purposes but also to inform your agency’s future decisions, like who to hire next.

2. Hard To Report To Stakeholders

With spreadsheets as the home of project management, translating data about people and time into tangible insights is a challenge. Data visualization is also virtually impossible with the limited range of chart-building functions that spreadsheet tools provide. As a result, reporting with spreadsheets becomes a time-consuming task. The more people and activities a project has, the more of a project manager’s time will be consumed by reporting.

3. Spreadsheets Manage Tasks, Not People

Managing projects with individual spreadsheets is a disaster waiting to happen. Though a single spreadsheet may give a clear breakdown of a single project, it has no way of indicating overload and underload for particular team members across all projects.

4. Lack Of High-Level Overview Of A Project

It’s well known in the industry that many designers suffer from tunnel vision: Without the big picture of a project in mind, their focus turns to solving ongoing tasks. But the problem of tunnel vision isn’t limited to designers; it also exists in project management.

When a project manager uses a tool like a spreadsheet, it is usually hard (or even impossible) to create a high-level overview of a project. Even when the project manager invests a lot of time in creating this overview, the picture becomes outdated as soon as the company’s direction shifts.

5. The Risk Of Outdated Information

Product development moves quickly, making it a struggle for project managers to continually monitor and introduce all required changes into the spreadsheet. Not only is this time-consuming, but it’s also not much fun — and, as a result, often doesn’t get done.

How Technology Makes Our Life Better: Introducing Float

The purpose of technology has always been to reduce mechanical work in order to focus on innovation. In a creative or product agency, the ultimate goal is to automate as much of the product design and human management process as possible, so that the team can focus on the deep work of creativity and execution.

With this in mind, let’s explore Float, a resource management app for creative agencies. Float makes it easier to understand who’s working on what, becoming a single source of truth for your entire team.

Helping Product Managers Overcome The Challenges Of Time And People Management

Float facilitates team collaboration and makes work more effective. Here’s how:

1. Visual Team Planner: See Team And Tasks At A Glance

Float allows you to plan tasks visually using the “Schedule” tab. In this view, you can allocate and update project assignments. A drag-and-drop interface makes scheduling your team simple.

Here’s an example of the Schedule View:


The schedule is not only a bird’s-eye view of who’s working on what and when, but also a dynamic canvas. Click on any empty space to create a task. Each task can be easily modified, extended or split. (Large preview)

But that’s not all. You can do more:

  • Prioritize tasks
    You can do this by simply moving them on top of others.
  • Duplicate tasks
    Simply press “Shift” and drag a selected task to a new location.
  • Extend a task
    If someone on your team needs extra time to finish a task, you can extend the time in one click using the visual interface.
  • Split a task
    When it becomes evident that someone on your team needs help with a task, it’s easy to split the task into parts and assign each part to another member.

You can prioritize tasks, duplicate them, extend or even split a task with Float. (Large preview)

2. Built-In Reporting And Statistics

Float’s built-in reporting and statistics feature (“utilization” reports) can save project managers hours of manual work at the end of the week, month or quarter.


Float helps you keep track of all of a project’s hours. (Large preview)

It’s relatively easy to customize the format of the report. You can:

  • Choose the roles of team members (employees, contractors, all) in “People”;
  • Choose the type of tasks (tentative, confirmed, completed) in “Task”.

It’s vital to mention that Float allows you to define different roles for team members. For example, if someone works part-time, you can define this in the “Profile” settings and specify that their work is part-time only. This feature also comes in handy when you need to calculate a budget.

You can email team members their individual hours for the week. (Large preview)

3. Search And Filter: All The Information You Need, At Your Fingertips

All of your team’s information sits in one place. You don’t need to use different spreadsheets for different projects; to add a new project, simply click on “Projects” and add a new object.

You can also use the filtering tools to highlight specific projects. A simple way to see relevant tasks at a glance is to filter by tags, leaving only particular types of projects (for example, projects that have contractors).


Customizing the format of the report
(Large preview)

4. Getting A High-Level Overview Of All Your Projects

With Float, you’ll always have a record of what happened and when.

  • Set milestones in a project’s settings, and keep an eye on upcoming milestones in your Activity Feed.

With Float, you can set milestones for your project. (Large preview)

  • Drill down into a user or a team and review their schedule. When you see that someone is overscheduled (red will tell you that), you can move other team members to that activity and regroup them.

Float helps you keep track of all of a project’s hours. (Large preview)

You can view by day, week or month, making it easy to zoom out and see a big picture of your project.


Zoom in for a detailed day view, or zoom out to view a monthly forecast. (Large preview)

5. Reducing The Risk Of Outdated Information

You don’t need to switch between multiple tools to find out everything you need to know. This means your information will be up to date (it’s much easier to manage your project using a single tool, rather than many).

Float can be easily connected to all the services you use.


“Schedule”, “People” and “Projects Tools” work together to create context. (Large preview)

Conclusion

If your team’s project management leaves a lot to be desired, Float is a no-brainer. Float makes it easy to see everything you need to know about your team’s projects in a single place, giving project managers the information and functionality they need to handle the fast-paced world of digital design and development. Get started with Float today!


Source: Smashing Magazine, Introducing Float.com: A Better Alternative To Spreadsheets