Collective #508

Front-end Developer Handbook 2019

Cody Lindley wrote this guide that outlines and discusses the practice of front-end engineering, how to learn it, and what tools are used when practicing it in 2019.

Check it out

Optimizing Performance With Resource Hints


Drew McLellan



Modern web browsers use all sorts of techniques to help improve page load performance by guessing what the user may be likely to do next. The browser doesn’t know much about our site or application as a whole, though, and often the best insights about what a user may be likely to do come from us, the developer.

Take the example of paginated content, like a photo album. We know that if the user is looking at a photo in an album, the chance of them clicking the ‘next’ link to view the following image in the album is pretty high. The browser, however, doesn’t really know that of all the links on the page, that’s the one the user is most likely to click. To a browser, all those links carry equal weight.

What if the browser could somehow know where the user was going next and could fetch the next page ahead of time so that when the user clicks the link the page load is much, much faster? That’s in principle what Resource Hints are: a way for the developer to tell the browser what’s likely to happen in the future so that it can factor that into its choices for how it loads resources.

All these resource hints use the rel attribute of the <link> element that you’ll be familiar with finding in the <head> of your HTML documents. In this article we’ll take a look at the main types of Resource Hints and when and where we can use them in our pages. We’ll go from the small and subtle hints through to the big guns at the end.

DNS Prefetching

A DNS lookup is the process of turning a human-friendly domain name like example.com into the machine-friendly IP address like 123.54.92.4 that is actually needed in order to fetch a resource.

Every time you type a URL in the browser address bar, follow a link in a page or even load a resource like an image from a different domain, the browser needs to do a DNS lookup to find the server that holds the resource we’ve requested. For a busy page with lots of external resources (like perhaps a news website with loads of ads and trackers), there might be dozens of DNS lookups required per page.

The browser caches the results of these lookups, but they can be slow. One performance optimization technique is to reduce the number of DNS lookups required by organizing resources onto fewer domains. When that’s not possible, you can warn the browser about the domains it’s going to need to look up with the dns-prefetch resource hint.

<link rel="dns-prefetch" href="https://images.example.com">

When the browser encounters this hint, it can start resolving the images.example.com domain name as soon as possible, even though it doesn’t know how it’ll be used yet. This enables the browser to get ahead of the game and do more work in parallel, decreasing the overall load time.

When Should I Use dns-prefetch?

Use dns-prefetch when your page uses resources from a different domain, to give the browser a head start. Browser support is really great, but if a browser doesn’t support it then no harm done — the prefetch just doesn’t happen.

Don’t prefetch any domains you’re not using, and if you find yourself wanting to prefetch a large number of domains, you might do better to ask why all those domains are needed and whether anything can be done to reduce the number.

Preconnecting

One step on from DNS prefetching is preconnecting to a server. Establishing a connection to a server hosting a resource takes several steps:

  • DNS lookup (as we’ve just discussed)
  • TCP handshake
    A brief “conversation” between the browser and server to create the connection.
  • TLS negotiation (on HTTPS sites)
    This verifies that the certificate information is valid and correct for the connection.

This typically happens once per server and takes up valuable time — especially if the server is very distant from the browser and network latency is high. (This is where globally distributed CDNs really help!) In the same way that prefetching DNS can help the browser get ahead of the game before it sees what’s coming, pre-connecting to a server can make sure that when the browser gets to the part of the page where a resource is needed, the slow work of establishing the connection has already taken place and the line is open and ready to go.

<link rel="preconnect" href="https://scripts.example.com">

When Should I Use preconnect?

Again, browser support is strong and there’s no harm if a browser doesn’t support preconnecting — the result will be just as it was before. Consider using preconnect when you know for sure that you’re going to be accessing a resource and you want to get ahead.

Be careful not to preconnect and then not use the connection, as this will both slow your page down and tie up a small amount of resources on the server you connect to.
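
Because preconnect arrived later than dns-prefetch, a common pattern (not from this article, but widely used) is to pair the two hints so that browsers without preconnect support still get the cheaper DNS head start. The crossorigin attribute is needed when the connection will carry CORS requests such as web fonts; the host name here is hypothetical.

<link rel="preconnect" href="https://fonts.example.com" crossorigin>
<link rel="dns-prefetch" href="https://fonts.example.com">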

Prefetching The Next Page

The two hints we’ve looked at so far are primarily focussed on resources within the page being loaded. They might be useful to help the browser get ahead on images, scripts or fonts, for example. The next couple of hints are concerned more with navigation and predicting where the user might go after the page that’s currently being loaded.

The first of these is prefetching, and its link tag might look like this:

<link rel="prefetch" href="https://example.com/news/?page=2" as="document">

This tells the browser that it can go ahead and fetch a page in the background so that it’s ready to go when requested. There’s a bit of a gamble here because you have to preempt where you think the user will navigate next. Get it right, and the next page might appear to load really quickly. Get it wrong, and you’ve wasted time and resources in downloading something that isn’t going to be used. If the user is on a metered connection like a limited mobile phone plan, you might actually cost them real money.

When a prefetch takes place, the browser does the DNS lookup and makes the server connection we’ve seen in the previous two types of hint, but then goes a step further and actually requests and downloads the files. It stops at that point, however, and the files are not parsed or executed and they are in no way applied to the current page. They’re just requested and kept ready.

You might think of a prefetch as being a bit like adding a file to the browser’s cache. Instead of needing to go out to the server and download it when the user clicks the link, the file can be pulled out of memory and used much quicker.

The as Attribute

In the example above, you can see that we’re setting the as attribute to as="document". This is an optional attribute that tells the browser that what we’re fetching should be handled as a document (i.e. a web page). This enables the browser to set the same sort of request headers and security policies as if we’d just followed a link to a new page.

There are lots of possible values for the as attribute, enabling the browser to handle different types of prefetch in the appropriate way.

Value of as   Type of resource
audio         Sound and music files
video         Video
track         Video or audio WebVTT tracks
script        JavaScript files
style         CSS style sheets
font          Web fonts
image         Images
fetch         XHR and Fetch API requests
worker        Web workers
embed         Multimedia requests
object        Multimedia <object> requests
document      Web pages

The different values that can be used to specify resource types with the as attribute.
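
As a sketch of the table in practice (the file names are hypothetical), the same prefetch hint can be pointed at different resource types:

<link rel="prefetch" href="/js/gallery.js" as="script">
<link rel="prefetch" href="/css/album.css" as="style">
<link rel="prefetch" href="/img/photo-2.jpg" as="image">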

When Should I Use prefetch?

Again, prefetch has great browser support. You should use it when you have a reasonable amount of certainty about the path the user might follow through your site, and when you believe that speeding up the navigation will positively impact the user experience. This should be weighed against the risk of wasting resources by possibly fetching a resource that isn’t then used.

Prerendering The Next Page

With prefetch, we’ve seen how the browser can download the files in the background ready to use, but also noted that nothing more was done with them at that point. Prerendering goes one step further and executes the files, doing pretty much all the work required to display the page except for actually displaying it.

This might include parsing the resource for any subresources such as JavaScript files and images and prefetching those as well.

<link rel="prerender" href="https://example.com/news/?page=2" as="document">

This really can make the following page load feel instantaneous, with the sort of snappy load performance you might see when hitting your browser’s back button. The gamble is even greater here, however, as you’re not only spending time requesting and downloading the files, but executing them, along with any JavaScript and such. This could use up memory and CPU (and therefore battery) for no benefit if the user ends up never requesting the page.

When Should I Use prerender?

Browser support for prerender is currently very restricted, with really only Chrome and old IE (not Edge) offering support for the option. This might limit its usefulness unless you happen to be specifically targeting Chrome. Again it’s a case of “no harm, no foul” as the user won’t see the benefit but it won’t cause any issues for them if not.

Putting Resource Hints To Use

We’ve already seen how resource hints can be used in the <head> of an HTML document using the <link> tag. That’s probably the most convenient way to do it, but you can also achieve the same with the Link: HTTP header.

For example, to prefetch with an HTTP header:

Link: <https://example.com/logo.png>; rel=prefetch; as=image;
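
HTTP also allows several link values in a single header, separated by commas, so multiple hints can be combined into one Link header (a sketch with hypothetical URLs):

Link: <https://example.com/logo.png>; rel=prefetch; as=image, <https://scripts.example.com>; rel=preconnect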

You can also use JavaScript to dynamically apply resource hints, perhaps in response to user interaction. To use an example from the W3C spec document:

var hint  = document.createElement("link");
hint.rel  = "prefetch";
hint.as   = "document";
hint.href = "/article/part3.html";
document.head.appendChild(hint);

This opens up some interesting possibilities, as it’s potentially easier to predict where the user might navigate next based on how they interact with the page.
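
For example, here’s a minimal sketch (our own, not from the spec) that injects a prefetch hint when the user hovers over a link, on the theory that hovering often signals an imminent click. The data-prefetch attribute is a hypothetical opt-in marker:

var links = document.querySelectorAll("a[data-prefetch]");
links.forEach(function (link) {
  link.addEventListener("mouseover", function () {
    // Inject a prefetch hint for the hovered link's destination.
    var hint  = document.createElement("link");
    hint.rel  = "prefetch";
    hint.as   = "document";
    hint.href = link.href;
    document.head.appendChild(hint);
  }, { once: true });
});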

Things To Note

We’ve looked at four progressively more aggressive ways of preloading resources, from the lightest touch of just resolving DNS through to rendering a complete page ready to go in the background. It’s important to remember that these hints are just that; they’re hints of ways the browser could optimize performance. They’re not directives. The browser can take our suggestions and use its best judgement in deciding how to respond.

This might mean that on a busy or overstretched device, the browser doesn’t attempt to respond to the hints at all. If the browser knows it’s on a metered connection, it might prefetch DNS but not entire resources, for example. If memory is low, the browser might again decide that it’s not worth fetching the next page until the current one has been offloaded.
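
We can apply the same kind of judgement ourselves before the browser has to. As a sketch (assuming the Network Information API, which isn’t available in every browser), a script could skip speculative prefetching for users who have asked to save data:

// Skip the prefetch when the user has enabled data saving.
var connection = navigator.connection;
if (!(connection && connection.saveData)) {
  var hint  = document.createElement("link");
  hint.rel  = "prefetch";
  hint.as   = "document";
  hint.href = "/news/?page=2";
  document.head.appendChild(hint);
}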

The reality is that on a desktop browser, the hints will likely all be followed as the developer suggests, but be aware that it’s up to the browser in every case.

The Importance Of Maintenance

If you’ve worked with the web for more than a couple of years, you’ll be familiar with the fact that if something on a page is unseen then it can often become neglected. Hidden metadata (such as page descriptions) is a good example of this. If the people looking after the site can’t readily see the data, then it can easily become neglected and drift out of date.

This is a real risk with resource hints. As the code is hidden away and goes pretty much undetected in use, it would be very easy for the page to change and any resource hints to go un-updated. The consequence of, say, prefetching a page that you don’t use is that the tools you’ve put in place to improve the performance of your site are then actively harming it. So having good procedures in place to keep your resource hints up to date becomes really, really important.


Source: Smashing Magazine, Optimizing Performance With Resource Hints

The User’s Perspective: Using Story Structure To Stand In Your User’s Shoes


John Rhea



Every user interaction with your website is part of a story. The user—the hero—finds themselves on a journey through your website on the way to their goal. If you can see this journey from their perspective, you can better understand what they need at each step, and align your goals with theirs.

My first article on websites and story structure, Once Upon a Time: Using Story Structure for Better Engagement, goes into more depth on story structure (the frame around which we build the house of a story) and how it works. But here’s a quick refresher before we jump into implications:

The Hero’s Journey

Most stories follow a simple structure that Joseph Campbell, in his seminal work The Hero with a Thousand Faces, called the Hero’s Journey. We’ll simplify it to a hybrid of the plot structure you learned in high school and the Hero’s Journey. We’ll then take that and apply it to how a user interacts with a website.

The Hero’s journey begins in the ordinary world. An inciting incident happens to draw the hero into the story. The hero prepares to face the ordeal/climax. The hero actually faces the ordeal. Then the hero must return to the ordinary world, his problem solved by the story.

Once upon a time… a hero went on a journey.
Ordinary World: The ordinary world is where the user starts (their every day) before they’ve met your website.

Inciting Incident/Call To Adventure: Near the beginning of any story, something will happen to the hero that will push (or pull) them into the story (the inciting incident/call to adventure). It will give them a problem they need to resolve. Similarly, a user has a problem they need to be solved, and your website might be just the thing to solve it. Sometimes though, a hero would rather stay in their safe, ordinary world. It’s too much cognitive trouble for the user to check out a new site. But their problem — their call to adventure — will not be ignored. It will drive the user into interacting with your site.

Preparation/Rising Action: They’ve found your website and they think it might work to solve their problem, but they need to gather information and prepare to make a final decision.

The Ordeal/Climax: In stories, the ordeal is usually the fight with the big monster, but here it’s the fight to decide to use your site. Whether they love the video game news you cover or need the pen you sell or believe in the graphic design prowess of your agency, they have to make the choice to engage.

The Road Back/Falling Action: Having made the decision to engage, the road back is about moving forward with that purchase, regular reading, or requesting the quote.

Resolution: Where they apply your website to their problem and their problem is *mightily* solved!

Return With Elixir: The user returns to the ordinary world and tells everyone about their heroic journey.

The User’s Perspective

Seeing the website from the user’s perspective is the most important part of this. The Hero’s Journey, as I use it, is a framework for better understanding your user and their state of mind during any given interaction on your site. If you understand where they are in their story, you can get a clearer picture of where and how you fit in (or don’t) to their story. Knowing where you are or how you can change your relationship to the user will make a world of difference in how you run your website, email campaigns, and any other interaction you have with them.

Numerous unsubscribes might not be a rejection of the product, but a sign that you sent too many emails without enough value. Great testimonials that don’t drive engagement might be too vague, or focused on how great you are rather than what problems you solve. A high bounce rate on your sign-up page might be because you focused more on your goals and not enough on your users’ goals. Your greatest fans might not be talking about you to their friends, not because they don’t like you, but because you haven’t given them the opportunity or the incentive to share. Let’s look at a few of these problems.

Plan For The Refusal Of The Call To Adventure

Often the hero doesn’t want to engage in the story or the user doesn’t want to expend the cognitive energy to look at another new site. But your user has come to your site because of their call to adventure—the problem that has pushed them to seek a solution—even if they don’t want to. If you can plan for a user’s initial rejection of you and your site, you’ll be ready to counteract it and mollify their concerns.

Follow-up or reminder emails are one way to help the user engage. This is not a license to stuff your content down someone’s throat. But if we know that one or even seven user touches aren’t enough to draw someone in and engage them with your site, we can create two or eight or thirty-seven user touches.

Sometimes these touches need to happen outside of your website; you need to reach out to users rather than wait for them to come back to you. One important thing here, though, is not to send the same email thirty-seven times. The user already refused that first touch. The story’s hero rarely gets pulled into the story by the same thing that happens again, but rather the same bare facts looked at differently.

So vary your approach. Do email, social media, advertising, reward/referral programs, and so on. Or use the same medium with a different take on the same bare facts and/or new information that builds on the previous touches. Above all, though, ensure every touch has value. If it doesn’t, each additional touch will get more annoying and the user will reject your call forever.

Nick Stephenson is an author who tries to help other authors sell more books. He has a course called Your First 10K Readers and recently launched a campaign with the overall purpose of getting people to register for the course.

Before he opened registration, though, he sent a series of emails. The first was a thanks-for-signing-up-to-the-email-list-and-here’s-a-helpful-case-study email. He also said he would send you the first video in a three-part series in about five minutes. The second email came six minutes later and had a summary of what’s covered in the video and a link to the video itself. The next day he emailed with a personal story about his own struggles and a link to an article on why authors fail (something authors are very concerned about). Day 3 saw email number four… you know what? Let’s just make a chart.

Day   Value/Purpose                                                             Email #
1     Case Study                                                                1
1     Video 1 of 3                                                              2
2     Personal Story and Why Authors Fail Article                               3
3     Video 2 of 3                                                              4
4     Honest discussion of his author revenue and a relevant podcast episode    5
5     Video 3 of 3                                                              6
6     Testimonial Video                                                         7
7     Registration Opens Tomorrow                                               8
8     Registration Info and a pitch on how working for yourself is awesome      9

By this point, he’s hooked a lot of users. They’ve had a week of high quality, free content related to their concerns. He’s paid it forward and now they can take it to the next level.

I’m sure some people unsubscribed, but I’m also sure a lot more people will be buying his course than with one or even two emails. He’s given you every opportunity to refuse the call, and sent eight different emails with resources in various formats to pull you back in and get you going on the journey.

I’ve Traveled This Road Before

It takes a lot less work to follow a path than to strike a new one. If you have testimonials, they can be signposts in the wilderness. While some of them can and should refer to the ordeal (things that might prevent a user from engaging with you), the majority of them should refer to how the product/website/thing will solve whatever problem the user set out to solve.

“This article was amazing!” says the author’s mother, or “I’m so proud of how he turned out… it was touch-and-go there for a while,” says the author’s father. While these are positive (mostly), they aren’t helpful. They tell you nothing about the article.

Testimonials should talk about the road traveled: “This article was awesome because it helped me see where we were going wrong, how to correct course, and how to blow our competitor out of the water,” says the author’s competitor. The testimonials can connect with the user where they are and show them how the story unfolded.

This testimonial for ChowNow talks about where they’ve been and why ChowNow worked better than their previous setup.

“Life before ChowNow was very chaotic — we got a lot of phone calls, a lot of mistyped orders. So with ChowNow, the ability to see the order from the customer makes it so streamlined.” John Sungkamee, Owner, Emporium Thai Cuisine

“I struggled with the same things you did, but this website helped me through.”

So often we hear a big promise in testimonials. “Five stars”, “best film of the year,” or “my son always does great.” But they don’t give us any idea of what it took to get where they are, that special world the testifier now lives in. And, even if that company isn’t selling a scam, your results will vary.

We want to trumpet our best clients, but we also want to ground those promises in unasterisked language. If we don’t, the user’s ordeal may be dealing with our broken promises, picking up the pieces and beginning their search all over again.

The Ordeal Is Not Their Goal

While you need to help users solve any problems preventing them from choosing you in their ordeal, the real goal is for them to have their problem solved. It’s easy to get these confused because your ordeal in your story is getting the user to buy in and engage with your site.

Your goal is for them to buy in/engage and your ordeal is getting them to do that. Their goal is having their problem solved and their ordeal is choosing you to solve that problem. If you conflate your goal and their goal then their problem won’t get solved and they won’t have a good experience with you.

This crops up whenever you push sales and profits over customer happiness. Andrew Mason, founder of Groupon, discusses some of his regrets about his time at Groupon in his interview with Alex Blumberg on the podcast “Without Fail”. The company started out with a one-deal-a-day email — something he felt was a core part of the business. But under pressure to meet the growth numbers their investors wanted (their company goals), they tried things that weren’t in line with the customer’s goals.

Note: I have edited the below for length and clarity. The relevant section of the interview starts at about 29:10.

Alex: “There was one other part of this [resignation] letter that says, ‘my biggest regrets are the moments that I let a lack of data override my intuition on what’s best for our company’s customers.’ What did you mean by that?”

Andrew: “Groupon started out with these really tight principles about how the site was going to work and really being pro customer. As we expanded and as we went after growth at various points, people in the company would say, ‘hey why don’t we try running two deals a day? Why don’t we start sending two emails a day?’ And I think that sounds awful, like who wants that? Who wants to get two emails every single day from a company? And they’d be like, ‘Well sure, it sounds awful to you. But we’re a data driven company. Why don’t we let the data decide?’ …And we’d do a test and it would show that maybe people would unsubscribe at a slightly higher rate but the increase in purchasing would more than make up for it. You’d get in a situation like: ‘OK, I guess we can do this. It doesn’t feel right, but it does seem like a rational decision.’ …And of course the problem was when you’re in hypergrowth like [we were] …you don’t have time to see what is going to happen to the data in the long term. The churn caught up with [us]. And people unsubscribed at higher rates and then before [we] knew it, the service had turned into… a vestige of what it once was.”

Without Fail, Groupon’s Andrew Mason: Pt. 2, The Fall (Oct. 8, 2018)

Tools For The Return With The Elixir

Your users have been on a journey. They’ve conquered their ordeal and done what you hoped they would, purchased your product, consumed your content or otherwise engaged with you. These are your favorite people. They’re about to go back to their ordinary world, to where they came from. Right here at this pivot is when you want to give them tools to tell how awesome their experience was. Give them the opportunity to leave a testimonial or review, offer a friends-and-family discount, or to share your content.

SunTrust allows electronic check deposit through their mobile app. For a long while, right after a deposit, they would ask you if you wanted to rate their app. That’s the best time to ask. The user has just put money in their account and is feeling the best they can while using the app.

“Money, money, money! Review us please?”

The only problem was that they asked you after every deposit. So if you had twelve checks to put in after your birthday, they’d ask you every time. By the third check, this was rage-inducing, and I’m certain they got negative reviews. They asked at the right time, but pestered rather than nudged and — harkening back to the refusal of the call section — they didn’t vary their approach or provide value with each user touch.

Note: Suntrust has since, thankfully, changed this behavior and no longer requests a rating after every deposit.

Whatever issue you’re trying to solve, the Hero’s Journey helps you see through your user’s eyes. You’ll better understand their triumphs and pain and be ready to take your user interactions to the next level.

So get out there, put on some user shoes, and make your users heroic!


Source: Smashing Magazine, The User’s Perspective: Using Story Structure To Stand In Your User’s Shoes

Collective #508






CodeGuppy

A place where kids and adults alike learn JavaScript coding through fun and easy to follow tutorials.

Check it out




Typora

A very minimal Markdown editor with a seamless writing and reading experience.

Check it out


Editor

In case you didn’t know about it: a real-time, responsive HTML/CSS/JS code editor.

Check it out






Collective #508 was written by Pedro Botelho and published on Codrops.


Source: Codrops, Collective #508

Design At Scale: One Year With Figma


Paul Hanaoka



This article is about how large teams can benefit from using more open, collaborative tooling, and how to make adoption and migration feasible and pleasant. Also, in case you didn’t guess from the title just yet, a lot of it will be about Figma and how we succeeded at adopting this design tool in our team.

The intended audience is experienced designers working in larger teams with design systems, developers or product managers looking to improve the way cross-functional teams work in their organization.

I’ve been using design tools in a professional setting for over ten years and am always trying to make teams I’m serving work more efficiently and more effectively. From scripting and actions in Photoshop, to widget libraries in Axure, to Sketch plugins, and now with Figma — I’ve helped design teams stay on the cutting edge without leaving developers or product managers behind.

The State of Design Tools 2015.

Basic knowledge of design systems and tools will be helpful, but not necessary as I hope to share specific examples and also “high level” concepts and methods that you can adapt for your team or context.

Our Design Workflow Circa 2015

Our primary tool in 2015 was Sketch, and that’s pretty much where the commonalities stopped. We all had different methods of prototyping, exporting, and sharing designs with stakeholders (InVision, Axure, Marvel, Google Slides, and even the antiquated Adobe PDF) and developers (Avocode, Zeplin, plugins without standalone apps like Measure). On rare occasions, we could send files directly to the engineers who were lucky enough to have the rare combination of a MacBook and a Sketch license.

When InVision released the Craft plugin, we were overjoyed — being able to prototype and upload screens from Sketch into InVision, sharing components and styles in nascent libraries across files — it was the designer’s dream come true.

A peek into our InVision projects.

Eventually, we all converged on the InVision platform. We created and documented the processes that helped reduce much of the friction in stakeholder collaboration and developer handoff. Yet, due to the complex permissions structure, InVision remained a closed ecosystem — if you weren’t a designer, there was an approval chain that made it difficult to get an InVision account, and once you got an account, you had to be added to the right groups.

Manually managing versions and files, storing and organizing them in a shared drive, and dealing with sync conflicts were just a few of the things that caused us many headaches.

Getting Started in Figma.

Could we really have an all-in-one tool that had all the best features of Sketch and InVision, with the real-time collaboration and communication features found in Google Docs? In addition to reducing overhead from context switching, we could also potentially simplify from three tool subscriptions (for mockups, prototyping, and developer handoff), to only one.

The Process

The first designers from our team to adopt Figma started experimenting with it when the first Figma beta was released in 2016. The features were limited but covered 80% of what we needed. Sketch import was buggy, but we still found immense value in being able to collaborate in real-time and most importantly, we could do 90% of the design work for a project inside a single tool. Stakeholder feedback, revisions, and developer handoff improved exponentially.

By 2017, we had a few designers using it for most of their work, and one of the Lexicon designers (Liferay’s design system), Emiliano Cicero, was quickly becoming an evangelist — which turned out to be a key factor in convincing the rest of the team to make the switch.

When Figma 2.0 debuted in the summer of 2017 with prototyping features added and huge improvements to the developer handoff capabilities, we knew this could be a viable tool for our global team. But how do you convince 20+ designers to abandon tools and workflows they love and have used comfortably for years?

I could write a series on that subject, but I’ll summarize by saying the two biggest things were: starting small, and creating a solid infrastructure.

Starting Small

In the fall of 2017, we started our first trial of Figma with a product team distributed between the United States and Brazil. We were fortunate to have a week-long kickoff together in our Los Angeles office. Designing flows and wireframes together in Figma was so much faster and more efficient. We were able to divide up tasks and share files and components without having to worry about constantly syncing a folder or a library.

At our global gathering in January 2018, we formulated a plan to slowly adopt Figma, using this team’s experiences to help form the infrastructure we’d need for the rest of the organization so that migration would be as seamless as possible.

The biggest challenge we faced was a tight deadline — it didn’t make any sense for us to rework our review and handoff process due to the scale of the project with multiple engineering teams and product managers distributed around the world. Even though the end result would have been better, the timing wasn’t right. Another factor was Figma’s lack of a reliable offline design experience (more on that later), and for these reasons, the team decided to use Sketch and Figma for wireframes and mockups, but any prototyping or review had to be done in InVision.

A DAM presentation from Design Week 2018.

Creating A Solid Figma Structure

One of the first steps was formulating rough guidelines for the project, file, and component organization. The foundation for these things was started by two junior (at the time) designers, Abel Hancock and Naoki Hisamoto, who never developed the bad layer-naming habits that seem to come from designers who cut their teeth in Photoshop. This method for organization, coupled with a year spent developing a small library of components for Liferay.com properties, was critical to setting the rest of the global team up for success.

An early organizational improvement created by one of our Liferay.com designers, inspired by Ben’s tweet, was our system of covers.

Figma project covers, by Abel Hancock.

We’ve made this file available if you’d like to copy it, otherwise it’s a pretty straightforward hack:

  1. Create a single frame in the first page of your file that’s 620×320.
  2. Add your design. If you have text, we found that the minimum size is ~24, the titles in our examples are set at 48.
  3. Enjoy!

Note: There will always be a slight margin around your cover, but if you set the page canvas to the same color as the card, it will reduce the appearance of this margin.

This helped transform our library, not just for designers, but for project and product managers and engineers who are trying to find things quickly. The search functionality was already really good, but the covers helped people narrow things down even faster, plus it allowed us to instantly communicate the status of any given file.

Sparking Joy with Figma Covers.

Prior to using Figma, in addition to a ‘Master’ design system Sketch file, most designers had base files they had developed over time with things like wireframing elements and basic components. As we coalesced into a single pattern, we started to combine everything and refined it into a single library. Since we were doing wireframes, mockups, and prototypes in Figma, we also started to abandon flow apps like Lucidchart, instead making our own task flow components in Figma.

Other utilities that we developed over time were redline components for making precise handoff specs, sticky notes for affinity diagrams (and just about anything), and flow nodes.

Liferay Design’s redline, flow, and affinity components.

One of the biggest benefits of doing this in Figma, was that improvements to any of these components that any designer made could easily be pulled into the library and then pushed out to all instances. Having this in a centralized place also makes maintenance a lot easier, as anyone on the team can contribute to improvements with a relatively simple process.

A redline document is for making it easier for the developer to know the dimensions, visual specs, and other properties of a UI component or a set of components. If you’re interested in the topic, you can also check Dmitriy Fabrikant’s article about design blueprints.

Some recommendations to keep in mind when creating components:

  1. Use of overrides and masters for powerful base components (more on that here);
  2. Establish a consistent pattern for naming (we use the atomic model);
  3. Document and label everything — especially layers.

With the advanced styling features released at the beginning of June 2017, the systems team finished a complete version of our Lexicon library in between our big product releases in July and the ramp-up in August. This was the final piece we needed to support the global team. Designers working in Marketing and other departments had already been using Figma for some time, but by last Fall almost all of the other product teams had finalized the move over to Figma.

As of today, most of the product designers are using only Figma. There are also a couple of designers working in legacy systems with lots of existing, complicated Sketch prototypes that aren’t worth importing to Figma. Another exception is a few designers who occasionally use apps like Principle or Adobe After Effects for more advanced animation that wouldn’t make sense to do in Figma. We even have a few designers exploring Framer X for even more robust prototypes, especially for work that requires leveraging any kind of data at scale. While there are some designers using multiple tools on a semi-regular basis, 80% of our product designers are using Figma for all of their design and prototyping work.

Continuous Improvements

We’re always working on ways to work more effectively, and one of the current things we’re iterating is best practices for naming pages. At first, we named pages according to the page name, but that proved problematic, plus, as we improved our libraries, the need for larger files with multiple pages was reduced.

Currently, we’re using a numbering system within files, with the top-most page being what’s delivered to the developers. The next phase we’re discussing nowadays is making the versions more meaningful with explicit labels (wireframes, mockups, breakpoints, etc.) and making better use of Figma’s built-in versioning, establishing best practices for when and how to save versions.

The evolution of page organization within a Figma file.

Final_Final_Last_2 — No More!

I generally hate to use the term ‘game-changer’, but when Figma added naming and annotating to the version history last March, it dramatically changed the way we organized our files. Previously, we all had different ways of saving iterations and versions.

Usually we would create new pages within a single file, sometimes with large files we would duplicate them and add a letter at the end of the filename to signal an iteration. If you were going to make drastic changes, then you might create a new file and append a version number. This was very natural, coming from the Photoshop/Sketch paradigm of managing multiple files for everything.

Version history timeline view.

The ability to work, periodically pausing to name and annotate a point in time, will be very familiar to anyone who has used a version control system like Git before. You can even look through the whole file history, go into past snapshots, and pick one out to name and annotate.

If you want to go back and revert to a past version, you can restore it and work on the file from that point in the history. The best part is that you don’t lose any of the work, because restoring a version doesn’t delete anything; it simply copies that state and pastes it at the top of the history.

Git it?

In this illustration, the designer arrives at final 3.0 after restoring final 1.1, but the file version history is still completely visible and accessible.

In cases where you’re starting a new project, or want to make some really dramatic changes to the file, it can be necessary for you to ‘fork’ the file. Figma allows you to duplicate a file at any given point in the history, but it’s important to note that the file history will not be copied.

We’ve found that a good way to work in this versioned system is to use your file history in a similar way to how a developer uses git — think of a Figma version as a commit or pull-request, and name and annotate them as such. For more, smarter thoughts on this, I recommend Seth Robertson’s Commit Often, Perfect Later, Publish Once: Git Best Practices — this is a good general philosophy for how to work in a version-controlled ecosystem. Also, Chris Beams’s How to Write a Git Commit Message is a great guide to writing meaningful and useful notes as you work.

Some practical tips we’ve discovered:

  1. Keep titles to 25 characters or less.
    Longer titles are clipped and you have to double-click on the note in the version history to open up the ‘Edit Version Information’ modal to read it.
  2. Keep your description to 140 characters or less.
    The full description is always shown, so keeping it to the point helps keep the history readable.
  3. Use the imperative mood for the title.
    This gives the future you a clearer idea of what will happen when you click on that point in time, e.g. “change buttons to blue” rather than “changing button colors to blue.”
  4. Use the description to explain ‘what’ and ‘why’ versus ‘how’.
    Answering the ‘why’ is a critical part of any designer’s job, so this helps you focus on what’s important as you’re working, as well as providing better information for you in the future.
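
For example (our own illustration, not from Figma’s documentation), a version note following these tips might look like:

Title: Change buttons to blue
Description: Aligns the primary call-to-action with the updated brand palette so the main action stands out.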

Working Offline

Disclaimer: This is based on our own experience, and a lot of it is our best guess as to how it works.

As I mentioned before, offline support in Figma is tenuous. If you already have a file open before going offline, you can continue working on it. Each change you make seems to be timestamped. If someone else worked on the same file while you were offline, the most recent change is the one rendered once you come back online.

When Cat came back online, her button position change was made, and merged with the Nerd’s color changes.

In this simple example, it doesn’t seem like too big of a deal — but in real life, this can get really messy, really fast. In addition to the high possibility of someone overriding your work, frames and groups could get stacked on top of one another.

Our workflow is to duplicate the page before (or after) going offline, and then do your work in that copy. That way it will be untouched when you come back online, and you can do any necessary merges manually.

“F” Is For Future

Adopting a new tool is never easy, but in the end, the benefits may far outweigh the costs.

The biggest areas of improvement our team has experienced are:

  • Collaboration
    It’s much easier to share our work and improvements with the team and community.
  • Transparency
    A system that is open by default is naturally more inclusive to people outside of the field of design.
  • Evolution
    Removing the “layers” between designers and engineers, enabling us to take the next step in design maturity.
  • Operations
    Adopting a single tool for wireframes, mockups, prototypes, and developer handoffs makes life easier for accounting, IT, and management.

Reducing the overall number of subscriptions was really helpful for our team, but as costs can vary from ‘free’ to over $500 per year this might not make sense for your specific context and needs. For a full breakdown, see Figma’s pricing page.

Grow And Get Better

Of course, no tool is perfect, and there’s always room for improvements. Some things that were missing from previous tools we used are:

  1. No plugin ecosystem.
    Sketch’s extensibility was a huge factor in making the switch from Photoshop a no-brainer. Figma does have a web API, but currently, there is no ‘write’ functionality. For now, Sketch remains the market leader with its vibrant community of extensions and plugins. (Of course, things might change in the future in case Figma opens the stage for plugin development as well.)
  2. Importing web or JSON data in prototypes.
    It would be a lot easier for us to design with real data. Sketch recently introduced a “Data” feature in v.52, and InVision’s Craft plugin is still very much the gold standard when it comes to easily adding large amounts of different data — and for now, we’re stuck manually populating text fields.
  3. More motion.
    The Principle integration is nice (if you have Principle), but having basic animation and advanced prototyping features in Figma would be a lot better.
  4. A smoother offline experience.
    As mentioned previously, as long as you have the Figma file open before going offline, you’re fine. This is probably OK for most people — but if you like to shut down your computer every night, it can be painful when you open it in the morning on a train or airplane and realize you forgot to leave Figma open.

Open-Source Design

A few months ago, the always controversial Dann Petty tweeted about developers having GitHub and photographers having Unsplash — but designers not having a platform for sharing things for free. Design Twitter™️ swooped in and he deleted his tweet before I could get a screengrab, but one thing I’d like to mention is that at Liferay, we’re very passionate about open source. To that end, we’ve created a Figma project for resources to share with the design community.

Open sourced files from Liferay.Design.

To access any of these files, check out liferay.design/resources/figma, and stay tuned as we grow and share more!


Source: Smashing Magazine, Design At Scale: One Year With Figma

Monthly Web Development Update 4/2019: Design Ethics And Clarity Over Style


Anselm Hannemann



“‘Ethics’ and Ethics” is more than a typical article. It’s a detailed essay exploring what the term ‘ethics’ really means, how its meaning has changed in recent times, and how diffusion of responsibility makes it hard to address and fix problems in big organizations.

Implementing design ethics, tech ethics, or business ethics as individual responsibilities might seem like a quick and easy solution, but it’s not a very effective one: individuals lack context when they don’t have support from the other people who provide the foundation for their work. Only if a company’s business analysts, bookkeepers, investors, PR, marketing and sales people, and the CEO themselves all contribute to building an ethical product will it become successful. And because such an undertaking requires so many people to be on board, it’s rarely seen out in the wild.

How much effort and goodwill it takes to build a company that follows an ethical approach is illustrated in the book It Doesn’t Have to Be Crazy at Work by Basecamp’s Jason Fried and David Heinemeier Hansson. It helps us understand why it’s so much easier to build a non-ethical company, and why, even if a couple of people in a team strive for better ethics, the product or company won’t reflect this individual path yet. To help us do better, Oliver Reichenstein, the author of the article I mentioned in the beginning, shares a very interesting approach: designers might want to start considering philosophy to encourage ethical thinking and to advocate for values and ethics in their everyday work.

News

  • We heard the announcement some months ago, and now the first builds are out: Microsoft’s new Chromium-based Edge browser is here. So what does that mean for front-end developers?
  • Safari 12.1 is included in macOS Mojave 10.14.4 and iOS 12.2 and with it, all users get Dark Mode for the Web, Intelligent Tracking Prevention 2.1, Payment Request API improvements, WebRTC improvements, Intersection Observer support, Web Share API, color input, and <datalist> support.
  • Firefox 66 has been released. From now on, autoplaying sounds will be blocked by default, and the Touch Bar on macOS is supported, too.

General

  • Humane by Design is an essential resource about focusing your design decisions on user well-being. It provides ideas and helpful resources about transparency, empowering people, fostering inclusiveness, showing respect, and making thoughtful decisions. A valuable website for any project or product you are in charge of or work on.
  • What does it look like when you try to use a ten-year-old browser today? The browser’s start page returns a 404 error, Google works but returns a no-script version (not deliberately, but because JavaScript errors out), YouTube doesn’t work without even showing an error, and creating accounts at any of the big services isn’t a pleasant experience at all. This article is not only enlightening but also reveals a lot of details we can take care of to make our web projects more sustainable and available to more people, regardless of which browser they use.

UI/UX

Eight pitfalls that complicate a design

When designing a product, there are quite a few pitfalls that can make a simple task overly complicated. Taras Bakusevych summarized what you need to watch out for.

Web Performance

  • Chrome 73 now supports the imagesrcset and imagesizes attributes on link rel=preload elements, as sketched below.
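
As a sketch of what that enables (the file names are hypothetical, and the attributes currently work only in Chromium-based browsers), a responsive hero image could be preloaded like this:

<link rel="preload"
      as="image"
      href="hero-800.jpg"
      imagesrcset="hero-400.jpg 400w, hero-800.jpg 800w, hero-1600.jpg 1600w"
      imagesizes="100vw">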

Tooling

  • Spectral is a flexible open-source JSON linter with out-of-the-box support for the OpenAPI specification and, as such, promotes consistency in API designs.

Privacy

  • This article explains why all your Alexa traffic is analyzed and probably listened to by someone who works on the product software to help the voice-activated assistant respond to commands.

JavaScript

JavaScript can be a performance liability and, thus, we need to use it responsibly. Jeremy Wagner shares tips for doing so.

Going Beyond…

  • It’s impossible to reach everyone, even for giants like Google or Facebook. Seth Godin on why reaching almost no one is fine if you ask yourself which no one. Your smallest viable audience holds you to account. It forces a focus and gives you nowhere to hide.
  • Google Maps is one of the products most of us use daily. And yet it’s the one product by Google that doesn’t make money — yet. As it seems, its makers are now trying to include ads into Maps to bring more profits to Alphabet.
  • George Monbiot wrote an article on “how the media let malicious idiots take over”. To make it clearer, let me quote the following paragraph:

    “If our politics is becoming less rational, crueller and more divisive, this rule of public life is partly to blame: the more disgracefully you behave, the bigger the platform the media will give you. If you are caught lying, cheating, boasting or behaving like an idiot, you’ll be flooded with invitations to appear on current affairs programmes. If you play straight, don’t expect the phone to ring.”

  • Mallen Baker explains why the environment is too important to leave to environmentalists and what we can do to make our lives worth living and create surroundings that work for humans.
  • It’s hard to believe in theory, but in practice, it’s very easy to make unethical choices deliberately. This article asks what drives people to make unethical choices. Sticking to your own ethical beliefs is hard when society is embracing something different — at least that’s what we tend to see from the filtered media and social media bubble. However, if we talk to people directly, I think it’s different. Many people share the same ethical beliefs and would condemn unethical behavior if it were transparent and public. The private and unseen is what makes it easier for people to go beyond their values, supercharged by the allure of money or of a higher status and reputation in society.

Source: Smashing Magazine, Monthly Web Development Update 4/2019: Design Ethics And Clarity Over Style

Art Direction For The Web Using CSS Shapes


Andrew Clarke



Last year, Rachel Andrew wrote an article that took a new look at CSS Shapes in which she reintroduced readers to the basics of using CSS Shapes. For anyone keen to understand how to use properties like shape-outside, shape-margin, and shape-image-threshold, Rachel’s is the ideal primer.

I’ve seen many examples of using the properties, but very few go beyond the Basic Shapes: circle(), ellipse(), and inset(). Even examples using polygon() shapes rarely go far beyond them. Considering the creative opportunities CSS Shapes offer, this is disappointing. But I’m sure that with a little inspiration and imagination, we can make more distinctive and engaging designs. So, I’m going to show you how to use CSS Shapes to create the following five different types of layout:

  1. V-shapes
  2. Z-patterns
  3. Curved shapes
  4. Diagonal shapes
  5. Rotated shapes

A Little Inspiration

Sadly, you won’t find many inspiring examples of websites which use CSS Shapes. That doesn’t mean that inspiration isn’t out there — you just have to look a little further afield at advertising, magazine, and poster design. However, it would be foolish for us to merely mimic work from a previous era and medium.

You can find inspiration in surprising places, such as these vintage advertisements.

For the past few years, I’ve filled Dropbox folders with inspiration and I really ought to move those examples to Pinterest. Fortunately, Kristopher Van Sant has been more diligent than me in compiling a Pinterest board full of inspiring ‘Shapes Of Text’ examples.

Shapes add energy to design, and this movement draws people in. They help to connect an audience with your story and make tighter connections between your visual and written content.

When you need content to flow around a shape, use the shape-outside property. You must float an element left or right for shape-outside to have any effect.

img {
  float: <values>;
  shape-outside: <values>;
}

NB: When you flow content around shapes, be careful not to allow any lines of text to become too narrow and fit only one or two words.
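
One way to keep those lines from hugging the float too tightly is shape-margin, one of the properties covered in Rachel’s primer. A quick sketch of ours:

img {
  float: left;
  shape-outside: circle(50%);
  shape-margin: 1.5rem;
}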

It often needs surprisingly little markup to develop dynamic and original layouts. My HTML for this series of five designs consists only of a header and main elements, figures, and images, and is often no more complicated than this:

<header role="banner">
  <h1>Mini Cooper</h1>
</header>

<figure>
  <img src="mini.png" alt="Mini Cooper">
</figure>

<main>
…
</main>

1. V-Shapes

For me, one of the most incredible aspects of modern-day CSS is that I can create a shape from the alpha channel of a partially transparent image with no need to draw a polygon path. I only need to create an image, and then a browser will take care of the rest.

I think this is one of the most exciting additions to CSS and it makes developing art direction for the web more straightforward, especially if you work with a content management system and dynamically generated content.

Left: Without CSS Shapes, this design feels dull and lifeless. Right: Creating v-shapes makes this design more distinctive and engaging.

To develop shapes from images, they must have an alpha channel which is either entirely or partially transparent. I needn’t draw a polygon to enable content to flow between the triangular shapes either side of my content in this first design; instead, I need only specify the URL of an image file as the shape-outside value:

  [src*="shape-left"],
[src*="shape-right"] {
  width: 50%;
  height: 100%;
}

[src*="shape-left"] {
  float: left;
  shape-outside: url('alpha-left.png');
}

[src*="shape-right"] {
  float: right;
  shape-outside: url('alpha-right.png');
}


Watch out for CORS (cross-origin resource sharing) when using images to develop your shapes. You must host images on the same domain as your product or website, or, if you use a CDN, make sure it sends the correct CORS headers (such as Access-Control-Allow-Origin) so that browsers are allowed to read the image data and build the shape. It’s also worth noting that the only way to test shapes locally is to use a web server; the file:// protocol simply won’t work.

Generated Content

As Rachel explained in her article:

“You could also use an image as the path for the shape to create a curved text effect without also including the image on the page. You still need something to float, however, and so for this, we can use Generated Content.”

As an alternative to an alpha channel image, I can use Generated Content — applied to two pseudo-elements — one for a polygon triangle on the left, the other for the right. My running text will now flow between the two generated shapes:

main::before {
  content: "";
  display: block;
  float: left;
  width: 50%;
  height: 100%;
  shape-outside: polygon(0 0, 0 100%, 100% 100%);
}

main p:first-child::before {
  content: "";
  display: block;
  float: right;
  width: 50%;
  height: 100%;
  shape-outside: polygon(100% 0, 0 100%, 100% 100%);
}

NB: Bennett Feely’s CSS clip-path maker is a fabulous tool for working out coordinate values for use with CSS Shapes.

Adjusting the width of alpha images at several breakpoints allows the shape of this running text to best suit its viewport.
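
Here is a sketch of those breakpoint adjustments; the breakpoint and width values are illustrative assumptions rather than values from the original demo:

[src*="shape-left"],
[src*="shape-right"] {
  width: 30%;
}

@media (min-width: 48em) {
  [src*="shape-left"],
  [src*="shape-right"] {
    width: 40%;
  }
}

@media (min-width: 64em) {
  [src*="shape-left"],
  [src*="shape-right"] {
    width: 50%;
  }
}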

2. Z-Patterns

A Z-pattern is a familiar path our eyes follow when reading content from left–right, top–bottom. By placing content along the hidden lines which form a Z, these patterns help guide a reader along this path, from where we’d like them to start reading towards a destination such as a call-to-action. Z-patterns can be either discreet — implied by focal points or elements with higher visual weight — or made obvious using CSS Shapes.

The z-pattern created by driving a narrow column of running text between two shapes suggests speed and the fun people will have when driving this iconic little car.

In this next design, a discreet z-pattern is formed as:

  1. The large image spans the full page width, its end-point emphasised with a right-aligned headline.
  2. A block of running text is formed by two CSS Shapes.
  3. The thick top border on a figure acting as a footer completes the Z.

There’s no need for complicated markup to implement this design and my simple HTML includes just three elements:

<header role="banner">
  <h1>Mini Cooper: icon of the ’60s</h1>
  <img src="banner.png" alt="Mini Cooper">
</header>

<main>
  <img src="placeholder-left.png" alt="" aria-hidden="true">
  <img src="placeholder-right.png" alt="" aria-hidden="true">
  …
</main>

<figure role="contentinfo">
…
</figure>

My page-width spanning header and figure needs no explanation, but flowing text between two triangular shapes is a little more complicated. To implement this z-pattern design, I choose to include two tiny 1×1px placeholder images onto which I apply two larger, shape-forming images using shape-outside. By attaching an aria-hidden attribute to these images, a screen reader won’t describe them.

After giving the two shape images the same dimensions, I float one image left and the other right, which allows my running text to run between them:

  [src*="placeholder-left"],
[src*="placeholder-right"] {
  display: block;
  width: 240px;
  height: 100%;
}

[src*="placeholder-left"] {
  float: left;
  shape-outside: url('shape-right.png');
}

[src*="placeholder-right"] {
  float: right;
  shape-outside: url('shape-right.png');
}

Left: A presentable but predictable design which lacks energy. Right: CSS Shapes suggest fun and speed.

The iconic Mini Cooper was fast and fun to drive. While my design would be perfectly presentable without a z-pattern formed by CSS Shapes, this layout looks predictable and lacks energy. The z-pattern created by driving a narrow column of running text between two shapes suggests speed and the fun people will have when driving this iconic little car.

3. Curved Shapes

One of the most fascinating aspects of CSS Shapes is how I can create elegant shapes using the alpha channel from a partially transparent image. This shape can be anything I imagine. I only need to create the image, and a browser will flow content around it.

While confining content to within a shape has been proposed in the CSS Shapes Module Level 2 specification, there’s currently no way to know if and when this might be implemented in browsers. But while shape-inside isn’t available (yet!), that doesn’t mean I can’t create similar results using shape-outside.

Left: Another presentable, but predictable design. Right: A more distinctive look created by using CSS Shapes.

By confining my content within a curved image floated right, I can easily add a distinctive look to this next design. To create the shape, I again use the shape-outside property, this time with the value being the same URL as my visible image:

[src*="curve"] {
  float: right;
  width: 400px;
  height: 100vh;
  shape-outside: url('curve.png');
}

To put some distance between my shape and the content flowing around it, the shape-margin property draws a further shape outside the contours of the first one. I can use any CSS absolute length unit — millimeters, centimeters, inches, picas, pixels, and points — or relative units (ch, em, ex, rem, vh, and vw):

[src*="curve"] {
  shape-margin: 3rem;
}

More Margins

Adding movement to this curved design relies on more than CSS Shapes. Using viewport width units, I give my headline, image, and running text a different left margin, each one proportional to the width of the viewport. This creates a diagonal from the back of my headline to the front of the car:

h1 {
  margin-left: 5vw;
}

img {
  margin-left: 10vw;
}

p {
  margin-left: 20vw;
}

4. Diagonal Shapes

Angles can help make layouts look less structured and feel more organic. The absence of clear structure encourages the eye to roam freely around a composition. This movement also causes an arrangement to feel energetic.

I see designs set around horizontal and vertical axes every day, but rarely anything based on a diagonal. Every once in a while, I spot an angled element — perhaps a banner graphic with its bottom sloping — but it’s rarely essential to a design.

It’s common to see content flowing around shapes in print design, but this was impossible to achieve on the web before CSS Shapes.

Even though CSS Grid involves setting columns and rows, there’s no reason why it can’t be used to create dynamic diagonals. This next design needs just a header and main element:

<header role="banner">
  <h1>Mini Cooper</h1>
  <img src="banner.png" alt="Mini Cooper">
</header>

<main>
  <img src="shape.png" alt="">
  …
</main>

To create the diagonal detail in this design, I again flow content around a shape image which itself is floated left. Again I use the shape-outside property with the same URL value as my visible image and a shape-margin to put distance between my shape and the content flowing around it:

  [src*="shape"] {
  float: left;
  shape-outside: url('shape.png');
  shape-margin: 3rem;
}

Given that responsiveness is one of the web’s intrinsic properties, we can rarely predict how content will flow, but that needn’t mean we avoid designs like this one. When there’s too little space for all my running text to fit above the shape, the fact that the shape is floated means content will simply flow into the space below it.

5. Rotated Shapes

Why settle for just using CSS Grid and Shapes when adding Transforms to the mix can enable layouts which were unimaginable only a few years ago? In this final example, flowing text around the cars in this image while rotating the entire composition needed a combination of all those properties.

Why settle for using only CSS Grid and Shapes?

As the image of these cars has no transparent alpha channel, flowing text around the shapes it contains requires a second image which includes only the alpha channel information.

Implementing this design requires two images; one visible, the other providing alpha channel information.

This time, I float the visible image right and apply the shape-outside property with a URL value which matches my alpha channel image:

  [src*="shape"] {
  float: right;
  width: 50%;
  shape-outside: url('alpha.png');
  shape-margin: 1rem;
}

You may have noticed that both my images contain elements which I rotated clockwise by ten degrees. With those images in place, I can rotate the entire layout ten degrees in the opposite direction to give the illusion that my images are upright:

body {
  transform: rotate(-10deg);
}

I rotate this layout just enough to make the design more appealing, without sacrificing readability.
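
One practical detail the example glosses over: rotating the body can expose empty corners and cause horizontal scrolling. A possible fix, offered here as a sketch rather than anything from the original design:

html {
  overflow-x: hidden; /* clip the corners the rotation pushes outside the viewport */
}

body {
  transform: rotate(-10deg);
  transform-origin: 50% 0; /* pivot around the top edge so content stays in view */
}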

Bonus Example: Polygon Shapes Sculpt Columns

An extract from ‘Art Direction for the Web’, available from 26th March 2019.

You can create strong, structural shapes with nothing more than type. Combining polygon() shapes and pseudo-elements, you can sculpt shapes from solid blocks of running text, in the style of Alexey Brodovitch and his influential work for Harper’s Bazaar.

Left: These beautiful numerals are almost too lovely to hide. They’re also perfect for carving into those columns. Right: When I use invisible pseudo-elements with no background or borders to develop polygon shapes, the result is two unusually shaped columns.

I formed these columns from two article elements placed on a two-column grid, with a gutter between them and a maximum width to help maintain a comfortable measure:

body {
  display: grid;
  grid-template-columns: 1fr 1fr;
  grid-gap: 2vw;
  max-width: 48em;
}

Because there are two article elements and I also specified two columns for my grid, there’s no need to be specific about the position of those articles. I can let the browser place them for me, and all that remains is to apply shape-outside to a generated pseudo-element in each column:

article:nth-of-type(1) p:nth-of-type(1)::before {
  content: "";
  float: left;
  width: 160px;
  height: 320px;
  shape-outside: polygon(0px 0px, 90px 0px, [...]);
}

article:nth-of-type(2) p:nth-of-type(2)::before {
  content: "";
  float: right;
  width: 160px;
  height: 320px;
  shape-outside: polygon(20px 220px, 120px 0px, [...]);
}

The Pay-Off

Now that Firefox has released a version which supports CSS Shapes, and has launched a Shape Path Editor inside its Developer Tools, Edge is the only major browser left without support for Shapes. This situation will soon change, as Microsoft has announced a switch from its own EdgeHTML rendering engine to Chromium’s Blink, the same engine used by Chrome and Opera.

Tools like CSS Shapes now give us countless opportunities to use art direction to capture readers’ attention and keep them engaged. I hope by now you’re as excited about them as I am!

Editor’s Note: Andy’s new book, Art Direction for the Web (pre-order your copy today), explores 100 years of art direction and how we can use this knowledge and the newest web technologies to create better digital products. Read an excerpt chapter to get a taste of what the book has to offer.


Source: Smashing Magazine, Art Direction For The Web Using CSS Shapes

Privacy UX: Better Cookie Consent Experiences

dreamt up by webguru in Uncategorized | Comments Off on Privacy UX: Better Cookie Consent Experiences

Privacy UX: Better Cookie Consent Experiences

Privacy UX: Better Cookie Consent Experiences

Vitaly Friedman



With the advent of the EU General Data Protection Regulation (GDPR) in May 2018, the web has turned into a vast exhibition of consent pop-ups, notifications, toolbars, and modals. While the intent of most cookie-related prompts is the same — to get a user’s consent to keep collecting and evaluating their behavior the same ol’ way they’ve been doing for years — implementations differ significantly, often making it ridiculously difficult or simply impossible for customers to opt out from tracking.

On top of that, many implementations don’t even respect users’ decisions anyway and set cookies despite their choices, assuming that most people will grant consent regardless.

Admittedly, they aren’t entirely wrong. In our research, the vast majority of users willingly provide consent without reading the cookie notice at all. The reason is obvious and understandable: many customers expect that a website “probably wouldn’t work, or the content wouldn’t be accessible otherwise.” Of course, that’s not necessarily true, but users can’t know for sure unless they try it out. In reality, though, nobody wants to play ping-pong with the cookie consent prompt, and so they click the consent away by choosing the most obvious option: “OK.”

Note: It’s important to understand that cookies and consent mechanisms discussed in this article go beyond GDPR. In Europe, they are addressed by a separate piece of legislation, the ePrivacy Directive, which is currently in draft for a revamp (as of April 2019). It may be finalized by summer 2019. We do not know what its final form will take, but it will determine the future of cookie consent prompts.

Now, with this common behavior online, what might come across is that cookie prompts aren’t particularly useful, and that’s partly true. But they certainly helped raise awareness about privacy and data collection on the web. In fact, users now know that websites track their data, which they weren’t aware of a few years ago. But they often see it as a necessary evil in exchange for accessing the content “for free.”

It’s not that users always happily share their personal data, but they don’t really feel that revoking consent is a viable alternative. To many of them, the only reasonable option is to provide consent while opting in for an ad-blocker extension or any other tracking blocker in their browser.


Cookie consent pop-ups don’t have to be tiny and unreadable. We could try to integrate them into our overall experience, too. Case in point: Oreo, with an Oreo cookie displayed as the cookie consent prompt. (Large preview)

Since cookie consent prompts always stand in the way of the content, they are often dismissed almost instinctively, not unlike carousels during onboarding. Hence, from the designer’s perspective, spending weeks refining that one-of-a-kind prompt might be a waste of time. (Sorry for crushing your dreams at this point.)


Iamsterdam.com allows visitors to adjust cookie settings and explains the different types of cookies, with an option to easily opt out from certain ones. (Large preview)

Since many websites heavily rely on collecting data, running A/B tests, and serving users with targeted advertising, often the design of the cookie consent notice is heavily influenced by business requirements and business goals. Is it acceptable for the business to allow users to quickly dismiss all tracking? Which cookies are (apparently) required for the site to work, and which ones are optional? Which cookies should be selected for approval by default, and which ones would require a manual opt-in? Should the customer be able to easily revoke the consent once it’s provided, and if so, how exactly would it be done if they don’t have an account on the site?

These business decisions have a major impact on design decisions, although from the user’s side, the optimal design would be quite obvious: no cookie consent at all. That would mean, for example, that users could define privacy settings in their browsers, and choose which cookies they’d like to consent to. The browser would then send a hint to each website the user visits and automatically opt in to or out of cookies, depending on the preferences provided.



In fact, a Do Not Track (DNT) header is already implemented and widely supported by browsers (although it was removed from Safari to prevent its potential use for fingerprinting), yet there is no established mechanism for transforming this preference into a granular selection of accepted cookie groups. It shouldn’t be very surprising that most advertisers wouldn’t be particularly happy about this pattern gaining traction either, but it could be a step closer to the outcome users actually prefer: no cookie consent prompts at all.
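
For reference, the hint itself couldn’t be simpler. A browser with the preference enabled adds a single header to every request it makes:

DNT: 1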

Admittedly, users sometimes find a way around it anyway. Some users who already run an ad-blocker use a cookie prompt blocker as well. The latter, however, usually grants full consent on the user’s behalf by default. Obviously, that goes against the very purpose of the cookie prompt in the first place; ideally, such extensions would automatically opt in only to essential cookies while opting out of everything else (if that’s possible at all).

As designers, though, we have a legal obligation to explain what happens to a user’s data and how it will be stored within the mandate of provided business requirements. As Geoffrey Keating mentioned in his article, “The Cookie Law and UX,” focusing specifically on legislation in Ireland, according to the Office of the Data Protection Commissioner, “the minimum requirement is clear communication to the user as to what he/she is being asked to consent to in terms of cookies usage and a means of giving or refusing consent.”

It’s worth noting that consent has to be “unambiguous” and “freely given,” as it must “leave no doubt as to the data subject’s intention, should be an active indication of the user’s wishes and can only be valid if the data subject is able to exercise a real choice.” Hence, silent, pre-ticked checkboxes or inactivity shouldn’t constitute consent.

This might sound obvious, but some solutions explore the uncharted legal territory that’s left open to interpretation. For instance, sometimes the website visitor “automatically submits a cookie consent by clicking a link on the website”, and sometimes you can choose which actions are “obvious enough” to be perceived as silent consent. Obviously, this isn’t an informed decision, and such a technique rightly belongs to the realm of mischievous culprits that should be avoided at all costs.

With this in mind, there are a couple of options we could consider:

Avoid Optional Cookies And Keep Only Functional Ones: No Prompt Required

It might appear that every single website needs to display a cookie consent notice to their European visitors, but if your website doesn’t collect, track, and evaluate any personal data from users, or it collects only anonymous data, you may not need one. In fact, one of the fundamental principles of the Privacy by Design framework is that non-essential cookies should be off by default and the user needs to actively opt in.

Now, cookies might be required for maintaining the logged-in state or user preferences, for example, and according to EU regulations you don’t need explicit consent for that. That’s also why many prompts have functional cookies enabled by default, without an option to disable them. And some sites, like GOV.UK, merely inform users about cookies, not requiring any input at all, but also not providing a choice to opt out from the optional Google Analytics cookie.


GOV.UK stores at least 18 cookies, and many of them are required for better experience; e.g. to track progress when applying for a licence, multivariate testing, making secure connections to websites, YouTube videos, email subscription updates. However, only one cookie is optional: Google Analytics (anonymized). (Large preview)

Not every website can get away without ad-related third-party cookies, though. One seemingly light way out would be to add a plain notification such as “By using our website, you are consenting to our use of cookies.” But this alone isn’t enough. As we need an active indication of the user’s consent, we have to require some sort of unambiguous action. For that reason, some sites add a “close” icon, making the consent box appear as a notification that can be dismissed. To ensure a more obvious acknowledgement, it’s a good idea to replace the “close” icon with a button. In many implementations, the button would simply say “Close” or “Save” or “Continue,” although “Accept and continue” is more clear.

In most cases, the notification doesn’t disappear until it is acted upon, hence being the very first thing that users see when visiting any page on the website. Do you need user consent on every page, though? You could be more selective and ask for the cookie consent only when it’s actually required; for example, when the user is setting up an account or wants to save their settings.


Twitter informs its visitors about the cookies used, but there is no option to adjust cookie preferences. The prompt will be clicked away by clicking on the ‘×’ in the top-right corner. Beware, however: some people see it as consent, while others see it as postponing the decision for a later time, similar to snoozing notifications. (Large preview)

JetBrains chose to display a plain text-only prompt as if it were in a terminal window. There are balanced options to opt in or opt out of cookie consent. (Large preview)

Allowing Users To Adjust Privacy Settings

While the previous option dictates complete obedience or complete lock-out, you could be more empathetic to the user’s intent. The user might have strong feelings about the exposure of their personal data, so providing a way out — not dissimilar to the personal questions we mentioned earlier — could keep them on the site. To achieve that, we could add an option to change settings, followed by an overview of different groups of cookies, with some of them being required for the site to function flawlessly, and others being optional.

The grouping could relate to the purpose of cookies such as advertising, analytics and statistics, or testing. It could also be much broader, allowing users to choose between “I am OK with personalized ads” or “I do not want to see personalized ads”. It’s also a good idea to explain to the user which features on the site will be unavailable once a certain group of cookies is blocked. TrustArc does that with a slider, allowing for a number of privacy levels, from allowing only required cookies to functional cookies to advertising cookies, while also showing its impact on the overall functionality of the site.


On the Nielsen Norman Group site, all cookies are grouped. Necessary cookies can’t be opted out of, but all other groups can be dismissed with a few taps. Ideally, there’d be a way to opt out of all optional ones at once. (Large preview)

Tabbed groups for cookies on MailChimp. (Large preview)

The Guardian provides clear options which have equal weight on the ‘Your privacy’ page (not in the initial prompt, though). The modal on the front page is large but it provides clear options. (Large preview)

The Daily Mash provides an option to adjust privacy settings (albeit de-emphasized), and in the settings panel visitors can easily reject or accept all cookies. Also, ‘Save & Exit’ is a very clear label for a button at the bottom. The experience might not be delightful, but it’s pretty clear. (Large preview)

MyFitnessPal, powered by TrustArc, displays a slider which explains which functionality is allowed or disallowed with every group of cookies. (Large preview)

A fantastic pattern on how a cookie settings dashboard could be designed to improve transparency. A mock-up by Matthias Ott and Joschi Kuphal built at the last @indiewebcamp in Dusseldorf. You can also download the Adobe XD source file, kindly provided by the authors. (Large preview)

In terms of layout, the prompt could be subtle and hardly noticeable, or obvious and difficult to ignore. We could place it in the header of the page, or at the bottom of the viewport, or we could also position it in the center of the page as a modal. All of these options could be floating and persistent as the user scrolls the page, thereby blocking access to a portion of the content (or entire content) until consent is granted.

De Telegraaf, for example, places a verbose cookie consent in the middle of the page, blurring out the content underneath, literally hijacking the page and standing in the way. It shouldn’t be a big revelation that from all the options we’ve tested, this one seems to be the most annoying one to users. In general, subtle prompts should be preferred, and the less space they need to be displayed, the better the overall user’s reaction has been.


It’s becoming common to blur out the content while displaying a cookie consent prompt. This technique brings focus to the prompt, but it also makes visitors more likely to click it away, often trading privacy for access to content. A slightly more subtle prompt would be more effective and helpful as users might browse around and leave without committing. Example: Telegraaf.nl. (Large preview)

Blurring out the content is rarely a good idea. Also, there’s a very clear primary action and secondary action in play here. It doesn’t look like a fair choice between the two options. Example: Pathe.nl. (Large preview)

Can you spot the cookie? A little notification in the bottom-right corner asks for the user’s consent on Empirebio.dk. (Large preview)

On HoistGroup, there is no way to dismiss cookies — they are required, and there is no way out. (Large preview)

On Zeit Online, there is no way to dismiss cookies — they are required, and there is no way out. (Large preview)

Ticketveiling.nl displays a cookie prompt at the bottom, but also a chat widget. Only one interviewed person expected that the chat was there to provide help with the cookie settings. It’s probably a good idea not to display the widget alongside the cookie consent prompt, though. (Large preview)

Appearance And Wording On Buttons

We also need to give some thought to the appearance of the consent form, especially to the design of buttons and the wording on those buttons. Wording such as “Just proceed” or “Save and Exit” or “Continue using the site” nudges users to move on with a default option, and in fact, many users are likely to do just that. It’s more respectful to have two buttons, a primary one for granting consent, and a secondary one for adjusting settings, with both buttons having neutral microcopy such as “Accept” and “Reject,” or “Okay” and “No, thanks.” That’s the path we’ve chosen with Smashing Magazine.


Accept or dismiss. It shouldn’t be harder than that. (Large preview)

NHS provides very clear options to accept cookies or ‘Turn cookies off’. (Large preview)

Large radio buttons to select what kind of cookies the visitor accepts. A fantastic example of privacy settings designed in an honest and transparent way. The only missing bit is to opt out from all optional cookies by default. Example: NHS. (Large preview)

Accept or reject with a single tap, although both options don’t have the same weight. Example: Fandom.com. (Large preview)

It shouldn’t be surprising that, of all the options, users felt most pleased with and appreciative of the ability to reject all cookies with a single click on a button. Some users were surprised that this option was even provided, and while a majority chose to grant consent, every fifth user refused it. By doing so, they assumed the website would be fully functional without cookies, and rightfully so.

Not many users would consider revoking or adjusting cookie settings after granting consent, but when asked to do so, they expected to find the options in the header or the footer of the website, either in the privacy policy or in the cookie policy. It’s not very surprising, and of course that’s where we need to place the options to adjust settings should the user wish to do so.


By default, all options on the Jamie Oliver website are off, with an option to turn them on. However, the main call-to-action button is ‘Accept recommended settings’, which turns all toggles to on. The option to proceed without cookies is positioned at the very bottom of the page, with a subtle ‘Close’. It’s probably not obvious enough, and another button on the top would be more helpful. (Large preview)

Users Understand When They Are Being Tricked

So far, the entire experience should be quite straightforward, right? Well, if a business model heavily relies on collecting and tracking data, you might be forced into shady areas where selecting anything but the easiest option is confusing and generates a lot of work. In our interviews, users could easily see through the companies’ agendas, even saying something along the lines of “Ah, I see what you did there.” Some things were less obvious than others, though.

Whenever the cookie consent suggested an option to review cookies or adjust cookie settings, users expected to see an overview of all cookies and be able to adjust which cookies should be allowed to be set and which not. In terms of interface design, usually it’s done with tabbed sections within a cookie consent widget, with some groups selected by default. It’s common to see functional cookies, analytics cookies, advertising cookies, and website settings cookies. This level of granular control isn’t often expected but it’s considered to be helpful and friendly, and as such preferred — however, only if the entire category of cookies could be unselected at once, with a single tap on a single checkbox.

Oddly enough, some implementations go to extremes, providing users with an overwhelming overview of every single cookie set by third parties. It’s not uncommon for all of them to be opted in by default, and opting out requires a tap on every single one of them, one by one. It might not seem like a big deal with five cookies, but it’s a monstrosity with over 250 cookies generously provided by dozens of trackers on the site. In such cases, many users gave up after a few opt-outs, providing full access to their data and moving on.


What would you expect to happen when clicking on the ‘close’ icon? How is a click on a button different from a click on the cross? Is there a difference? On Kayak, users were confused during the test. (Large preview)

Unfortunately, that’s just scratching the surface. Imagine a cookie settings prompt with a “Close ×” button. What behavior would you expect from clicking “Close”? Would you expect the prompt to be dismissed and then eventually reappear? Or would you expect all tracking scripts to be opted out by default? Opted in? Unsurprisingly, most users weren’t even thinking that far — they just wanted the pop-up to disappear. Nobody expected the trackers to be opted out by default, yet many users felt that it was a “temporary thing” that would show up again “at some point.” In practice, almost all of the time, closing the prompt was perceived by website owners as user consent and, in fact, all cookies were stored to their full extent. That’s not necessarily what the user was expecting.


A slightly confusing experience on Speisekarte.de, with an option to accept or ‘deselect all’ options in the top-left corner, in the same color, with a delimiter > in between. Also, the wording ‘Agree and Proceed’ or ‘Learn more’ isn’t obvious. (Large preview)

The wording on buttons and links caused major confusion for users. On Speisekarte.de, you can either “Agree and Proceed” (primary, large green button) or “Learn more” (subtle grey link, not even underlined). What would you expect to appear after clicking “Learn More”? While many users expect a privacy policy to appear, the action actually prompts the management of cookies, with 405 ad selection, delivery, and reporting partners, 446 information storage and access partners, 274 content selection, delivery, and reporting partners, 372 measurement partners, and 355 personalization partners. “Agree and Proceed” grants access to the storage and evaluation of customers’ personal data for 1,852 partners. Not too shabby, is it?

It’s not obvious that the listing area for all those partners is scrollable at all, and there is no obvious way to unselect all of them. Would you find it enjoyable to manually opt out 1,852 toggles, one by one? Probably not. As it turns out, you can “deselect all” partners in the top-left corner of the window — however, this option is conveniently presented in a way that resembles breadcrumb navigation rather than a button. And, of course, all partners are opted in by default. That’s deceptive, dishonest, and disrespectful.

Once you get into the habit of rejecting cookies by default, you can find a number of shady and questionable practices that seem to be widespread. Sometimes companies place Facebook and Twitter tracking cookies in the “Necessary” category. Sometimes the option to reject cookies is conveniently hidden behind an extra “Manage” option. And sometimes there is an option to opt out from analytics, but a user is automatically opted in for “Anonymous analytics.”

Some companies go beyond that, inventing new ways of doing business should the user wish to avoid tracking. Sometimes this shows up as a “Premium EU Subscription” without on-site advertising and tracking scripts, sometimes as a website being unavailable altogether, and sometimes as an “EU experience” (which, frankly, is much faster and more lightweight than its non-EU counterpart). Not a single person accessing the site appreciated either of these options. That shouldn’t be a big revelation, but a significant number of users, confronted with such treatment, have nothing left but to leave the site in search of alternatives.


USA Today provides an EU experience which happens to be much cleaner and faster than the original one. For EU visitors, the ‘site does not collect personally identifiable information or persistent identifiers from, deliver a personalized experience to, or otherwise track or monitor persons reasonably identified as visiting our Site from the European Union.’ (Large preview)

The Washington Post has introduced a ‘Premium EU Subscription’ that doesn’t contain any on-site advertising or third-party ad tracking. (Large preview)

For a while, Los Angeles Times displayed a message stating that the website was unavailable in most European countries. (Large preview)

Guidelines And Strategies For Better Design

According to the EU regulation, each cookie, its provider, purpose, expiry date, and type should be explained and elaborated in detail in a privacy policy, and many services such as TrustArc, IAB Consent Framework, Cookiebot, OneTrust, Cookie Consent, and many others provide this feature out of the box. They also provide an option to customize which groups of cookies should be presented as choices to the user, and while the default experience is decent, often it can be used to make it unnecessarily difficult for the customer to adjust their settings.


Clear options, honest experience, transparent interface. A great example on Sourceforge. (Large preview)

At the end of the day, we need to provide good experiences while also achieving our business goals. We can do that with a series of steps:

  1. We need to audit and group all cookies required on the site;
  2. We need to decide how each of these groups would be labelled, which ones would be required, and which would be optional;
  3. We need to understand what impact disabling a group of cookies would have on the functionality of the site, and communicate each choice to the user;
  4. Last but not least, we need to decide what settings should be selected by default, and what customization options we want to present to the user.

The simplest design pattern seems to be obvious. If you need user consent, display a narrow notification in the header or at the bottom of the viewport. No need to blur out or darken the content to make the notification noticeable; just make sure it doesn’t blend in with the rest of the site. If possible, allow users to accept or reject cookies with two obvious buttons: “Okay” and “No, thanks.” Otherwise, provide an option to adjust settings, followed by an overview of cookie categories. There, make the consequences of each choice for the website’s functionality obvious, and enable users to “Approve all” or “Reject all” cookies at once, both for the entire site and for each category.
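
As a rough sketch of that pattern (the markup, class names, and copy here are illustrative, not a drop-in implementation):

<aside class="cookie-notice" role="dialog" aria-label="Cookie consent">
  <p>
    We use cookies for analytics and advertising.
    <a href="/cookie-policy">Learn more</a>
  </p>
  <button type="button" data-consent="accept">Okay</button>
  <button type="button" data-consent="reject">No, thanks</button>
  <a href="/cookie-settings">Adjust settings</a>
</aside>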

Where to place the notification notice? The position doesn’t seem to really matter — it didn’t make any difference in decision-making. The overlay covering half of the page, however, was perceived as the most annoying option — and it shouldn’t be too surprising as it literally blocks a large portion of the content on the page. Most users almost instinctively know what they are presented with, and they also know what action they’d prefer to take to get on with the site, so lengthy explanations are ignored or dismissed as swiftly as push notifications or permission requests.

In the next article of the series, we’ll look into notification UX and permission requests, and how we can design the experience around them better — and with users’ privacy in mind.

A kind thank you to Heather Burns for reviewing this article before publication.


Source: Smashing Magazine, Privacy UX: Better Cookie Consent Experiences

Understanding Subresource Integrity

dreamt up by webguru in Uncategorized | Comments Off on Understanding Subresource Integrity

Understanding Subresource Integrity

Understanding Subresource Integrity

Drew McLellan



If you’ve ever used a CDN-hosted version of a JavaScript library, you may have noticed a strange looking integrity attribute on the script tag. This attribute contains seemingly endless alphanumeric junk that you may be tempted to strip out in the quest for cleaner code.

All that junk is actually a really useful security feature called Subresource Integrity (SRI) that can help to defend your site against certain types of hacks and compromises. In this article, we’ll take a look at what SRI is, how it can help protect you, and how you can start using it in your own projects, not just for files hosted on CDNs.

A Bit Of History

Way back in the days when JavaScript was very much the poorer cousin to HTML and CSS, we didn’t need to think too much about how our scripts could be used as an attack vector for our websites. Most sites were hosted on a single physical server somewhere on our own hosting infrastructure, and it was the server we thought about defending when it came to security best practices.

As browsers became more capable and net connections got fatter, we started to use more and more JavaScript, and eventually, reusable JavaScript libraries began to spring up. In those early days, libraries like script.aculo.us, Prototype and eventually jQuery began to gain adoption amongst developers looking to add more interactivity into their pages.

With these added libraries and subsequent plugins came added page weight, and before long we were starting to think seriously about front-end performance. Resources like Content Delivery Networks (CDNs) that had previously been the reserve of giant corporations were becoming commonplace for everyday folk building snappy websites.

Along the way, some bright spark noticed that sites were all requesting their own copies of common libraries — things like the latest jQuery — and if there was a common CDN version of those libraries that could be used by every site, then the user wouldn’t need to keep downloading the same file. They’d take the hit for the first site to use the file, but then it would sit in their local browser cache and downloads could be skipped for each subsequent site. Genius!

This is why you’ll see CDN links for your favorite libraries using URLs like jsdelivr.com — they’re making use of a common CDN to host the files so that their users see the performance benefits.

What Could Go Wrong?

This remains a good, practical way to work, but it does introduce a potential vector for attack. Let’s imagine that it’s 2012 and everyone is using the brand new jQuery 1.8. Back with the traditional way of doing things, everyone would have their own jQuery 1.8 file hosted as part of their own website on their own server.

If you were some kind of evil actor — like some sort of jQuery-based Hamburglar — and had figured out a sneaky way to hack the library for your own evil gains, you’d have to target every website individually and compromise their servers to have any impact. That’s a lot of effort.

But that’s not how things are now, as everyone is using jQuery loaded from a common CDN. And when I say everyone, I don’t mean hundreds of web pages. I mean millions of web pages. Suddenly that one file has become a very attractive target for our shady hacker. If they can compromise that one file, they can very quickly have code running in millions of web pages across the globe.

It doesn’t matter what that code is. It could be a prank to deface pages, it could be code to steal your passwords, it could be code to mine cryptocurrency, or it could be sneaky trackers to follow you around the web and make a marketing profile. The important thing is that the innocent file that the developer added to a page has been changed and you now have some evil JavaScript running as part of your site. That’s a big problem.

Enter Subresource Integrity

Rather than rolling back the clocks and abandoning a useful way to use code, SRI is a solution that adds a simple level of security on top. What SRI and the integrity attribute do is make sure that the file you link into a page never changes. And if it does change, the browser will reject it.

Checking that code hasn’t changed is a very old problem in computer science and thankfully it has some very well established solutions. SRI does a good job of adopting the simplest — file hashing.

File hashing is the process of taking a file and running it through an algorithm that reduces it to a short string representation, known as a hash or checksum. The process is repeatable but not reversible: if you were to give someone else a file along with its hash, they could run the same algorithm and check that the two match. If either the file or the hash changes, there’s no longer a match, and you know something is wrong and should distrust the file.

When using SRI, your webpage holds the hash and the server (CDN or anywhere else) holds the file. The browser downloads the file, then quickly computes its hash to make sure it matches the one in the integrity attribute. If it matches, the file is used; if not, it is blocked.

Trying It Out

If I go to getbootstrap.com today to get a CDN link to a version of Bootstrap, I’m given a tag that looks like this:

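<script src="https://stackpath.bootstrapcdn.com/bootstrap/4.3.1/js/bootstrap.min.js"
        integrity="sha384-[base64-encoded hash]"
        crossorigin="anonymous"></script>

(The tag above is a reconstruction; the hash value is elided here, and the real tag carries a long base64 string in its place for the exact file being linked.)
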
You can see that the src attribute is as we’re used to, and the integrity attribute holds what we now know to be a hash.

The hash is actually in two parts. The first is a prefix to declare which hashing algorithm to use. In this case, it’s sha384. This is followed by a dash and then the hash itself, encoded with base64.

You may be familiar with base64 as a way of encoding inline files like images into pages. It’s not a cryptographic process — it’s just a fast and convenient way to encode potentially messy data in a way that translates neatly to ASCII. This is why it’s used a lot on the web.

On seeing this, the browser will download bootstrap.min.js. Before executing it, it will base64 decode the hash and then use the sha384 hashing algorithm to confirm that the hash matches the file it’s just downloaded. If it matches, the file is executed.

I can test this out by putting that tag in a page, and then flipping to the Network tab in my browser tools to see that the file has been loaded.


The Network tab, showing the files loading successfully. (Large preview)

I can see that bootstrap.min.js (and also the jQuery file it needs) have loaded successfully.

Let’s see what would happen if I update the hash to be something I know to be incorrect.


With an incorrect hash in place, the file is blocked. (Large preview)

As you can see, the hash that’s specified in my page no longer matches the file, so the file gets blocked.

Using SRI In Your Own Projects

Having this capability for libraries on a CDN is great, and if you see the option to use an embedded file with an integrity attribute, you should definitely favor it. But it’s not limited to big projects on CDNs; you can use this yourself for your own sites.

It’s not at all far-fetched to imagine a scenario where a hacker manages to get access to just a few files on your site. I think most of us have seen a client, colleague, or friend who has at some point had a WordPress site compromised with a load of nasty junk they didn’t even realise was there.

SRI can protect you from this too. If you generate integrity hashes for your own files, then you can have your site reject any changes just as it would for a remotely hosted file.

Generating Hashes

You can, as you’d expect, run some commands at your computer’s terminal to generate a hash for a file. This example of how to do so comes from the MDN Subresource Integrity page:

cat FILENAME.js | openssl dgst -sha384 -binary | openssl base64 -A  

That’s getting the content of FILENAME.js and passing it as input to openssl to create a digest using sha384, which is then passed as input into another openssl command to base64 encode the result. Not only is that complicated and obscure, but it’s also not the sort of thing you want to be doing by hand every time your JavaScript file changes.

More usefully, you’ll want to integrate this somehow into your site’s build process, and as you’d imagine, there are plenty of ready-made options there. The exact implementation is going to vary wildly based on your project, but here are some building blocks.

If you use Gulp to build your sites, there’s gulp-sri which will output a JSON file with a list of your files and their hashes. You can then make use of this in your site. For example, for a dynamically rendered site, you might create a template plugin to read that file and add the hashes to your templates where needed.

If you’re still with Gulp but have a static site (or a statically generated site) you might use gulp-sri-hash which will actually run through your HTML pages and modify the pages to add hashes where needed, which is very handy.

If you’re using Webpack, there’s webpack-subresource-integrity which, in true Webpack style, is more complex than any human might expect it to be, but does appear to work.

For those using the Handlebars templating engine, there appear to be options available to you, and if your build process is just basic JavaScript, there are simple solutions there too.

If you’re using a CMS like WordPress, I found a plugin that appears to make it easy, although I’ve not tried it myself. Googling for your own platform of choice with SRI or Sub Resource Integrity will likely point you in the right direction.

You essentially want to hook your hashing in after your JavaScript files have been minified and then make that hash available in some way to whatever part of your system outputs the <script> tags. One of the wonders of the web platform is that it’s so technically diverse, but that sadly leaves me unable to give you good implementation instructions!

Other Things To Note

In this article, I’ve talked a lot about JavaScript files because that’s really where it makes the most sense to defend against hacking attacks. SRI also works with CSS, and you can use it in exactly the same way there. The risk from malicious CSS is much lower, but the potential to deface a site still exists, and who knows what browser bugs could lead to CSS inadvertently exposing your site to a hacker. So it’s worth using SRI there too.
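
Using it for a stylesheet looks much the same as for a script; here is a sketch with a placeholder URL and an elided hash:

<link rel="stylesheet"
      href="https://cdn.example.com/library/library.min.css"
      integrity="sha384-[base64-encoded hash]"
      crossorigin="anonymous">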

Another interesting thing you can do is use a Content Security Policy to specify that any script (or styles) on your page must use SRI, and of course that SRI must validate.

Content-Security-Policy: require-sri-for script;  

This is a way to ensure that SRI is always used, which could be useful on sites worked on by multiple team members who may or may not be fully up to speed with how to do things. Again, a good place to read more about this is the always-great MDN docs for Subresource Integrity.
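
The directive also accepts a list of resource types, so you can cover stylesheets as well. Note that require-sri-for was still an experimental directive at the time of writing:

Content-Security-Policy: require-sri-for script style;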

The last thing that’s worth talking about is browser support for SRI. Support in modern browsers is broad, with the main exception being Internet Explorer. Due to the backwards-compatible way the specification has been implemented, however, it’s safe to use immediately. Browsers that understand the integrity attribute will use the hash and check integrity, and older browsers will just carry on as they always have and keep working. Of course, you’ll not get the added protection in those older browsers, but you will in the browsers that do offer support.

Conclusion

We’ve seen not only what those weird hashes in the integrity attributes do, but how we can use them to defend against certain types of attacks on our website. Of course, there’s no one silver bullet that will defend our sites against every type of exploit, but Subresource Integrity is a really useful tool in the chain.

Exploiting a security flaw is often about getting multiple small pieces to line up. If A is in place, and you can make B happen, then a bug in C makes D possible. Browser features like SRI give us a good way to tie things down just a little bit more and potentially break that chain and prevent a hacker from getting what they want. What’s more, if you can integrate it into your build process or CMS, it’s something you should be able to set up once and then forget about and it won’t cause you day to day inconvenience.

As such, I’d really recommend taking a serious look at Subresource Integrity and implementing it on your sites if you can.


Source: Smashing Magazine, Understanding Subresource Integrity

Collective #506

dreamt up by webguru in Uncategorized | Comments Off on Collective #506




World Draw

An AI Experiment to draw the world together using the same technology behind QuickDraw and AutoDraw. Read more about it in this tweet by Active Theory.

Check it out

Water.css

Water.css is a just-add-css collection of styles to make simple websites look nicer.

Check it out

El Grapho

El Grapho is a high performance WebGL graph data visualization engine that can support millions of interactive nodes and edges in any modern browser.

Check it out

Hyper Editor

Hyper Editor is a block based, backend independent content editor that is intended to be integrated with any Content Management System (CMS) or Framework.

Check it out


Collective #506 was written by Pedro Botelho and published on Codrops.


Source: Codrops, Collective #506

Digging Into The Display Property: The Two Values Of Display

dreamt up by webguru in Uncategorized | Comments Off on Digging Into The Display Property: The Two Values Of Display

Digging Into The Display Property: The Two Values Of Display

Digging Into The Display Property: The Two Values Of Display

Rachel Andrew



A flex or grid layout starts out with you declaring display: flex or display: grid. These layout methods are values of the CSS display property. We tend not to talk about this property on its own very much, instead concentrating on the values of flex or grid, however, there are some interesting things to understand about display and how it is defined that will make your life much easier as you use CSS for layout.

In this article, the first in a short series, I’m going to take a look at the way that the values of display are defined in the Level 3 specification. This is a change to how we defined display in earlier versions of CSS. While it may seem unusual at first for those of us who have been doing CSS for many years, I think these changes really help to explain what is going on when we change the value of display on an element.

Block And Inline Elements

One of the first things we teach people who are new to CSS are the concepts of block and inline elements. We will explain that some elements on the page are display: block and they have certain features because of this. They stretch out in the inline direction, taking up as much space as is available to them. They break onto a new line; we can give them width, height, margin as well as padding, and these properties will push other elements on the page away from them.

We also know that some elements are display: inline. Inline elements are like words in a sentence; they don’t break onto a new line, but instead sit alongside one another with white space between them. If you add margins and padding, these will display, but they won’t push other elements away.

The behavior of block and inline elements is fundamental to CSS and the fact that a properly marked up HTML document will be readable by default. This layout is referred to as “Block and Inline Layout” or “Normal Flow” because this is the way that elements lay themselves out if we don’t do anything else to them.

Inner And Outer Values Of display

We understand block and inline elements, but what happens if we make an item display: grid? Is this something completely different? If we look at a component on which we have specified display: grid, in terms of the parent element in the layout it behaves like a block level element. The element will stretch out and take up as much space in the inline dimension as is available, it will start on a new line. It behaves just like a block element in terms of how it behaves alongside the rest of the layout. We haven’t said display: block though, or have we?

It turns out that we have. In Level 3 of the Display specification, the value of display is defined as two keywords. These keywords define the outer value of display, which will be inline or block and therefore define how the element behaves in the layout alongside other elements. They also define the inner value of the element — or how the direct children of that element behave.

This means that when you say display: grid, what you are really saying is display: block grid. You are asking for a block level grid container. An element that will have all of the block attributes — you can give it height and width, margin and padding, and it will stretch to fill the container. The children of that container, however, have been given the inner value of grid so they become grid items. How those grid items behave is defined in the CSS Grid Specification: the display spec gives us a way to tell the browser that this is the layout method we want to use.

I think that this way of thinking about display is incredibly helpful; it directly explains what we are doing with various layout methods. If you were to specify display: inline flex, what would you expect? Hopefully, a box that behaves as an inline element, with children that are flex items.
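
For example, a component might be written like this (the selector is mine; as discussed later in the article, browsers currently need the legacy single keyword, so the two-value declaration would simply be ignored where unsupported):

.badge {
    display: inline-flex;  /* legacy keyword that browsers understand today */
    display: inline flex;  /* two-value equivalent; dropped by browsers that don't support it */
    align-items: center;
}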

There are a few other things neatly explained by thinking about display in this new way, and I’ll take a look at some of these in the rest of this article.

We Are Always Going Back To Normal Flow

When thinking about these inner and outer display properties, it can be helpful to consider what happens if we don’t mess around with the value of display at all. If you write some HTML and view it in a browser, what you get is Block and Inline Layout, or Normal Flow. The elements display as block or inline elements.

See the Pen Block and Inline Layout by Rachel Andrew on CodePen.

The example below contains some markup that I have turned into a media object by making the div display: flex. The two direct children now become flex items, so the image is now in a row with the content. If you look inside the content, however, there is a heading and a paragraph which are displaying in normal flow again. The direct children of the media object became flex items; their children return to normal flow unless we change the value of display on the flex item. The flex container itself is a block box, as you can see from the fact that the border extends to the edge of its parent.

See the Pen Block and Inline Layout With Flex Component by Rachel Andrew on CodePen.
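
A stripped-down sketch of the pattern described above; the markup and class names are my assumption, not copied from the demo:

/* Assumed markup: a <div class="media"> containing an <img>
   and a <div class="content"> that holds an h2 and a p. */

.media {
    display: flex;          /* the two direct children become flex items */
    border: 2px solid #000; /* the border reaches the edge of the parent: a block box */
}

/* Nothing needs to change inside .content; the heading and
   paragraph it contains are laid out in normal flow again. */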

If you work with this process, letting elements on your page lay themselves out in this nice, readable normal flow rather than fighting against it and trying to place everything yourself, CSS is much easier. You are also less likely to run into accessibility issues, as you are working with the document order, which is exactly the order a screen reader or a person tabbing through the document follows.

Explaining flow-root And inline-block

The value of inline-block is also likely to be familiar to many of us who have been doing CSS for a while. This value is a way to get some block behavior on an inline element. For example, an inline-block element can have a width and a height. An element with display: inline-block also behaves in an interesting way in that it creates a Block Formatting Context (BFC).

A BFC does some useful things in terms of layout; for example, it contains floats. To read about Block Formatting Contexts in more detail, see my previous article, “Understanding CSS Layout And The Block Formatting Context.” Saying display: inline-block therefore gives you an inline box which also establishes a BFC.

As you will discover (if you read the above-mentioned article about the Block Formatting Context), there is a newer value of display which also explicitly creates a BFC. This is the value of flow-root. This value creates a BFC on a block, rather than an inline element.

  • display: inline-block gives you a BFC on an inline box.
  • display: flow-root gives you a BFC on a block box.
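
A classic use for this is containing floats. In this sketch (the selectors are mine), the floated image would normally poke out of its parent’s background; giving the parent display: flow-root makes its box contain the float:

.container {
    display: flow-root;  /* establishes a BFC, so the floated child is contained */
    background: #eee;
}

.container img {
    float: left;
    margin-right: 1em;
}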

You are now probably thinking that this is all a bit confusing: why do we have two completely different keywords here, and what happened to the two-value syntax we were talking about before? This leads neatly into the next thing I need to explain about display: the fact that CSS has a history we need to deal with where the display property is concerned.

Legacy Values Of Display

The CSS2 Specification detailed the following values for the display property:

  • inline
  • block
  • inline-block
  • list-item
  • none
  • table
  • inline-table

Also defined were the various internal table values, such as table-cell, which we are not dealing with in this article.

Values were later added to these to support flex and grid layout:

  • grid
  • inline-grid
  • flex
  • inline-flex

Note: The specification also defines ruby (along with a set of internal ruby values) to support Ruby text, which you can read about in the Ruby specification.

These are all single values for the display property, defined before the specification was updated to describe CSS layout in this way. Something very important about CSS is that we don’t go around breaking the web; we can’t simply change things. We can’t suddenly decide that everyone must use a new two-value syntax, thereby breaking every website ever built with the single-value syntax unless a developer goes back and fixes it!

While thinking about this problem, you may enjoy this list of mistakes in the design of CSS, which in many cases are less mistakes than things designed without a crystal ball to see into the future. However, the fact remains that we can’t break the web, which is why we are in the situation where browsers currently support a set of single values for display while the specification moves to two values.

The way we get around this is for the specification to define the single values as legacy keywords for display, each of which maps to a two-keyword equivalent. This gives us the following table of values:

Single Value    Two-Keyword Values      Description
block           block flow              Block box with normal flow inner
flow-root       block flow-root         Block box defining a BFC
inline          inline flow             Inline box with normal flow inner
inline-block    inline flow-root        Inline box defining a BFC
list-item       block flow list-item    Block box with normal flow inner and additional marker box
flex            block flex              Block box with inner flex layout
inline-flex     inline flex             Inline box with inner flex layout
grid            block grid              Block box with inner grid layout
inline-grid     inline grid             Inline box with inner grid layout
table           block table             Block box with inner table layout
inline-table    inline table            Inline box with inner table layout

To explain how this works, we can think about a grid container. In the two-value world, we would create a block level grid container with:

.container {
    display: block grid;
}

However, the legacy keyword means that the following does the same thing:

.container {
    display: grid;
}

If, instead, we wanted an inline grid container, in the two-value world we would use:

.container {
    display: inline grid;
}

And if using the legacy values:

.container {
    display: inline-grid;
}

We can now go back to where this conversation began and look at display: inline-block. Looking at the table, you can see that this is defined in the two-value world as display: inline flow-root. It now matches display: flow-root which in a two-value world would be display: block flow-root. A little bit of tidying up and clarification of how these things are defined. A refactoring of CSS, if you like.
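
Expressed as code (the selectors are mine), the mapping looks like this:

/* Legacy keyword: an inline box that establishes a BFC */
.tag {
    display: inline-block;  /* maps to: inline flow-root */
}

/* flow-root is the block-level counterpart */
.wrapper {
    display: flow-root;     /* maps to: block flow-root */
}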

Browser Support For The Two-Value Syntax

As yet, browsers do not support the two-value syntax for the display property (there is an open implementation bug for Firefox). Implementation, when it happens, would essentially involve aliasing the legacy values to the two-value versions. It is therefore likely to be a good while before you can actually use the two-value versions in your code. However, that really isn’t the point of this article. Instead, I think that looking at the values of display in the light of the two-value model helps to explain much of what is going on.

When you define layout on a box in CSS, you are defining what happens to this box in terms of how it behaves in relation to all of the other boxes in the layout. You are also defining how the children of that box behave. You can think in this way long before you can explicitly declare the values as two separate things, as the legacy keywords map to those values, and it will help you understand what happens when you change the value of display.


Source: Smashing Magazine, Digging Into The Display Property: The Two Values Of Display
