
Monthly Web Development Update 6/2019: Rethinking Privacy And User Engagement


Anselm Hannemann



Last week I read about the web turning into a dark forest. This made me think, and I’m convinced that there’s hope in the dark forest. Let’s stay positive about how we can contribute to making the web a better place and stick to the principle that each one of us is able to make an impact with small actions. Whether it’s you adding Webmentions, removing tracking scripts from a website, recycling plastic, picking up trash from the street to throw it into a bin, or cycling instead of driving to work for a week, we all can make things better for ourselves and the people around us. We just have to do it.

News

  • Safari led the way by introducing its Intelligent Tracking Prevention and making it the default. Now Firefox has followed, enabling its Enhanced Tracking Protection by default, too.
  • Chrome 75 brings support for the Web Share API which is already implemented in Safari. Latency on canvas contexts has also been improved.
  • The Safari Technology Preview Release 84 introduced Safari 13 features: warnings for weak passwords, dark mode support for iOS, support for aborting Fetch requests, FIDO2-compliant USB security keys with the Web Authentication standard, support for “Sign In with Apple” (for Safari and WKWebView). The Visual Viewport API, ApplePay in WKWebView, screen sharing via WebRTC, and an API for loading ES6 modules are also supported from now on.
  • There’s an important update to Apple’s App Store review guidelines that requires developers to offer “Sign In with Apple” in apps that support third-party sign-in, once the service becomes available to the public later this year.
  • Firefox 67 is out now with the Dark Mode CSS media query, WebRender, and side-by-side profiles that allow you to run multiple instances in parallel. Furthermore, enhanced privacy controls against crypto miners and fingerprinting are built in, as well as support for AV1 video on Windows, Linux, and macOS, String.prototype.matchAll(), and dynamic imports.

General

  • The web relies on so many open-source projects, and, yet, here’s what it looks like to live off an open-source budget. Most authors are below the poverty line, forced to live in cheaper countries or not able to make a living at all from their public service of providing reliable, open software for others who then use it commercially.
  • We all know that annoying client who ignores your knowledge and gets creative on their own. As a developer, Holger Bartel experienced it dozens of times; now he found himself in the same position, having ordered a fine drink and then messed it up.

UI/UX

  • With so many dark patterns built into the software and websites we use daily, Fabricio Teixeira and Caio Braga call for a tech diet for users.

Facebook, Instagram, Twitter, and Netflix Nutrition Facts.

“Dark patterns try to manipulate users to engage further, deeper, or longer on a site or app. The world needs a tech diet, and designers can help make it a reality.” (Image credit)

CSS

  • The CSS feature for truncating multi-line text has been implemented in Firefox. -webkit-line-clamp: 3, for example, will truncate text at the end of the third line.
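Note that the property only takes effect alongside its legacy -webkit-box companion declarations, even in Firefox. A minimal sketch (the .teaser class is a hypothetical example):

```css
/* Truncate the text of a teaser paragraph after three lines. */
.teaser {
  display: -webkit-box;
  -webkit-box-orient: vertical;
  -webkit-line-clamp: 3;
  overflow: hidden;
}
```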

Privacy

  • Anil Dash tries to find an answer to the question of whether we can trust a company in 2019.
  • Kevin Litman-Navarro analyzed over 150 privacy policies and shares his findings in a visual story. Not only does it take about 15 minutes on average to read a privacy policy, but most of them also require college-level or even professional-level reading skills to understand.
  • Our view on privacy hasn’t changed much since the 18th century, but the circumstances are different today: Companies have a wild appetite to store more and more data about more people in a central place — data that was once exclusively accessible by state authorities. We should redefine what privacy, personal data, and consent are, as Maciej Cegłowski argues in “The new wilderness.”
  • The people at WebKit are very active when it comes to developing clever solutions to protect users without compromising too much on usability and keeping the interests of publishers and vendors in mind at the same time. Now they introduced “privacy preserving ad click attribution for the web,” a technique that limits the data which is sent to third parties while still providing useful attribution metrics to advertisers.

An overview of how hard privacy policies are to read and how much time it requires to do so. Most privacy policies are college and professional career level. Only one is comprehensible on a Middle School level.

Most privacy policies on the web are harder to read than Stephen Hawking’s “A Brief History Of Time,” as Kevin Litman-Navarro found out by examining 150 privacy policies. (Image credit)

Accessibility

  • Brad Frost describes a great way to reduce motion on websites (for animated GIFs, for example), using the picture element and its media query feature.
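The idea can be sketched like this, using the prefers-reduced-motion media query to serve a static image in place of an animated GIF (file names are hypothetical):

```html
<!-- Users who have asked their OS for reduced motion get the static
     PNG; everyone else gets the animated GIF. -->
<picture>
  <source srcset="still-frame.png" media="(prefers-reduced-motion: reduce)">
  <img srcset="animation.gif" alt="A looping product demo">
</picture>
```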

Tooling

  • The IP Geolocation API is an open-source, real-time IP-to-geolocation JSON API with detailed country data, based on the MaxMind GeoLite2 database.
  • Pascal Landau wrote a step-by-step tutorial on how to build a Docker development setup for PHP projects, and yes, it contains everything you might need to apply it to your own projects.

Work & Life

  • Roman Imankulov from Doist shares insights into decision-making in a flat organization.
  • As a society, we’re overworked, own too many belongings, yet crave more, and companies exist only to grow indefinitely. This is how we kick-started climate change in the past century, and it is how we got more people than ever into burnout, depression, and various other health issues, including work-related suicides. Philipp Frey has a bold theory that breaks with our current system: Research by Nässén and Larsson suggests that a 1% decrease in working hours could lead to a 0.8% decrease in greenhouse gas emissions. Taking it further, the paper suggests that working 12 hours a week would allow us to easily achieve climate goals, provided we also change the economy so that it no longer focuses entirely on growth. An interesting study that explores new ways of working, living, and consuming.
  • Leo Babauta shares a method that helps you acknowledge when you’re tired. It’s hard to accept, but we are humans and not machines, so there are times when we feel tired and our batteries are low. The best way to recover is realizing that this is happening and focusing on it to regain some energy.
  • Many of us are trying to achieve some minutes or hours of “deep work” a day. Fadeke Adegbuyi’s “The Complete Guide to Deep Work” shares valuable tips to master it.

Going Beyond…

  • People who live a “zero waste” life are often seen as extreme, but this is only one point of view. Here’s the other side, where one of those “extreme” people reminds us that it used to be normal to go to a farmer’s market to buy things that aren’t packed in plastic, to ride a bike, and to drink water from a public fountain. Instead, our consumerism has become quite extreme and needs to change if we want to survive and stay healthy.
  • Sweden wants to become climate-neutral by 2045 and has now presented an interesting visualization of the plan. It’s designed to help policymakers identify and fill in gaps to ensure that the goal will be achieved. The visualization is open to the public, so anyone can hold the government accountable.
  • Everybody loves them, many have them: AirPods. However, they are an environmental disaster, as this article shows.
  • The North Face tricking Wikipedia is advertising’s dark side.
  • The New York Times published a guide which helps us understand our impact on climate change based on the food we eat. This is not about going vegan but how changing eating habits can make a difference, both to the environment and our own health.

Source: Smashing Magazine, Monthly Web Development Update 6/2019: Rethinking Privacy And User Engagement

Collective #524







The Layout Instability API

Philip Walton introduces the new Layout Instability API to help web developers detect when unexpected layout shifts are happening to their users.




Hello Subgrid!

Rachel Andrew introduces subgrid at the CSSconf EU 2019, with use cases, example code and some thoughts on where we might see Grid going in the future.









doodle-place

Doodle-place automatically animates your doodle and places it in an explorable 3D world of user doodles. Made by Lingdong Huang.




Using DevTools To Understand Modern CSS Layouts

Chen Hui Jing explains a variety of modern CSS layout techniques through live demonstrations via DevTools, and provides real-world use cases of how such techniques allow designs to better adapt across a broad range of viewports.






Version Museum

In case you didn’t know about it: Version Museum showcases the visual history of popular websites and games that have shaped our lives.




Pika CDN

A new CDN built to serve the 60,000+ npm packages written in ES Module (ESM) syntax, meaning you can import packages directly without worrying about their module format or file location.









Collective #524 was written by Pedro Botelho and published on Codrops.


Source: Codrops, Collective #524

Inspired Design Decisions: Avaunt Magazine


Andrew Clarke



I hate to admit it, but five or six years ago my interest in web design started to wane. Of course, owning a business meant I had to keep working, but staying motivated and offering my best thinking to clients became a daily struggle.

Looking at the web didn’t improve my motivation. Web design had stagnated, predictability had replaced creativity, and ideas seemed less important than data.

The reasons why I’d enjoyed working on the web no longer seemed relevant. Design had lost its joyfulness. Complex sets of build tools and processors had even supplanted the simple pleasure of writing HTML and CSS.

When I began working with the legendary newspaper and magazine designer Mark Porter, I became fascinated by art direction and editorial design. As someone who hadn’t studied either at art school, everything about this area of design was exciting and new. I read all the books about influential art directors I could find and started collecting magazines from the places I visited around the world.

The more inspired I became by magazine design, the faster my enthusiasm for web design bounced back. I wondered why many web designers think that print design is old-fashioned and irrelevant to their work. I thought about why so little of what makes print design special is being transferred to the web.

My goal became to hunt for inspiring examples of editorial design, study what makes them unique, and find ways to adapt what I’d learned to create more compelling, engaging, and imaginative designs for the web.

My bookcases are now chock full of magazine design inspiration, but my collection is still growing. I have limited space, so I’m picky about what I pick up. I buy a varied selection, and rarely collect more than one issue of the same title.

I look for exciting page layouts, inspiring typography, and innovative ways to combine text with images. When a magazine has plenty of interesting design elements, I buy it. However, if a magazine includes only a few pieces of inspiration, I admit I’m not above photographing them before putting it back on the shelf.

I buy new magazines as regularly as I can, and a week before Christmas, a few friends and I met in London. No trip to the “Smoke” is complete without a stop at Magma, and I bought several new magazines. After I explained my inspiration addiction, one friend suggested I write about why I find magazine design so inspiring and how magazines influence my work.

Avaunt magazine cover

Avaunt magazine. (Large preview)

That conversation sparked the idea for a series on making inspired design decisions. Every month, I’ll choose a publication, discuss what makes its design distinctive, and how we might learn lessons which will help us to do better work for the web.

As an enthusiastic user of HTML and CSS, I’ll also explain how to implement new ideas using the latest technologies: CSS Grid, Flexbox, and Shapes.

I’m happy to tell you that I’m inspired and motivated again to design for the web and I hope this series can inspire you too.

Andy Clarke
April 2019

Avaunt Magazine: Documenting The Extraordinary

Andy Clarke reading Avaunt magazine

What struck me most about Avaunt was the way its art director has used colour, layout, and type in diverse ways while maintaining a consistent feel throughout the magazine. (Large preview)

One look at me is going to tell you I’m not much of an adventurer. I don’t consider myself particularly cultured and my wife jokes regularly about what she says is my lack of style.

So what drew me to Avaunt magazine and its coverage of “adventure,” “culture,” and “style” when there are so many competing and varied magazines?

It often takes only a few page turns for me to decide if a magazine offers the inspiration I look for. Something needs to stand out in those first few seconds for me to look closer, be it an exciting page layout, an inspiring typographic treatment, or an innovative combination of images with text.

Avaunt has all of those, but what struck me most was the way its art director has used colour, layout, and type in diverse ways while maintaining a consistent feel throughout the magazine. There are distinctive design threads which run through Avaunt’s pages. The bold uses of a stencil serif and geometric sans-serif typeface are particularly striking, as is the repetition of black, white, and a red which Avaunt’s designers use in a variety of ways. Many of Avaunt’s creative choices are as adventurous as the stories it tells.

Avaunt magazine spread

© Avaunt magazine. (Large preview)

Plenty of magazines devote their first few spreads to glossy advertising, and Avaunt does the same. Flipping past those ads finds Avaunt’s contents page and its fascinating four-column modular grid.

This layout keeps content ordered within spacial zones but maintains energy by making each zone a different size. This layout could be adapted to many kinds of online content and should be easy to implement using CSS Grid. I’m itching to try that out.
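As a rough sketch of how such a four-column modular grid might translate to CSS Grid (class names, row heights, and the spanned zones are hypothetical, not taken from Avaunt’s design):

```css
/* A four-column modular grid: equal columns, evenly spaced rows. */
.contents {
  display: grid;
  grid-template-columns: repeat(4, 1fr);
  grid-auto-rows: minmax(120px, auto);
  gap: 2vw;
}

/* Spacial zones of different sizes keep the layout energetic:
   a feature item spans two columns and two rows. */
.contents .feature {
  grid-column: span 2;
  grid-row: span 2;
}
```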

For sans-serif headlines, standfirsts, and other type elements which pack a punch, Avaunt uses MFred, designed initially for Elephant magazine by Matt Willey in 2011. Matt went on to art direct the launch of Avaunt and commissioned a stencil serif typeface for the magazine’s bold headlines and distinctive numerals.

Avaunt Stencil was designed in 2014 by London-based studio A2-TYPE, who have since made it available to license. There are many stencil fonts available, but it can be tricky to find one which combines boldness and elegance. Looking for a stencil serif hosted on Google Fonts? Stardos Stencil would be a good choice for display-size type. Need something more unique? Try Caslon Stencil from URW.
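If you’d like to experiment with Stardos Stencil, it can be loaded like any other Google Fonts family (assuming the current css2 API URL format; the h1 rule is just an example):

```html
<link rel="preconnect" href="https://fonts.gstatic.com" crossorigin>
<link href="https://fonts.googleapis.com/css2?family=Stardos+Stencil:wght@400;700&display=swap" rel="stylesheet">
<style>
  /* Reserve the stencil face for display sizes, as in Avaunt */
  h1 { font-family: "Stardos Stencil", serif; }
</style>
```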

Avaunt’s use of a modular grid doesn’t end with the contents page and is the basis for a spread on Moscow’s Polytechnic Museum of Cold War curiosities which first drew me to the magazine. This spread uses a three-column modular grid and spacial zones of various sizes.

What fascinates me about this Cold War spread is how modules in the verso page combine to form a single column for text content. Even with this column, the proportions of the module grid still inform the position and size of the elements inside.

 Avaunt magazine spread

© Avaunt magazine. (Large preview)

While the design of many of Avaunt’s pages is devoted to a fabulous reading experience, many others push the grid and pull their foundation typography styles in different directions. White text on dark backgrounds, brightly coloured spreads where objects are cut away to blend with the background. Giant initial caps which fill the width of a column, and large stencilled drop caps which dominate the page.

Avaunt’s playful designs add interest, and the arrangement of pages creates a rhythm which I very rarely see online. These variations in design are held together by the consistent use of Antwerp — also designed by A2-TYPE — as a typeface for running text, and a black, white, and red colour theme which runs throughout the magazine.

Avaunt magazine spread

© Avaunt magazine. (Large preview)

Studying the design of Avaunt magazine can teach and inspire. How a modular grid can help structure content in creative ways without it feeling static. (I’ll teach you more about modular grids later.)

How a well-defined set of styles can become the foundation for distinctive and diverse designs, and finally how creating a rhythm throughout a series of pages can help readers stay engaged.

Next time you’re passing your nearest magazine store, pop in and pick up a copy of Avaunt Magazine. It’s about adventure, but I bet it can help inspire your designs to be more adventurous too.

Say Hello To Skinny Columns

For what feels like an eternity, there’s been very little innovation in grid design for the web. I had hoped the challenges of responsive design would result in creative approaches to layout, but sadly the opposite seems to be true.

Three examples using skinny columns

Top: Squeezing an image into one column reduces its visual weight and upsets the balance of my composition. Middle: Making the image fill two standard columns also upsets that delicate balance. Bottom: Splitting the final column, then adding half its width to another, creates the perfect space for my image, and a more pleasing overall result. (Large preview)

Instead of original grid designs, one, two, three, or four block arrangements of content became the norm. Framework grids, like those included with Bootstrap, remain the starting point for many designers, whether they use those frameworks or not.

It’s true that there’s more to why so much of the web looks the same than web designers using the same grid. After all, there have been similar conventions for magazines and newspapers for decades, but somehow magazines haven’t lost their personality in the way many websites have.

I’m always searching for layout inspiration, and magazines are a rich source. Reading Avaunt reminded me of a technique I came across years ago but hadn’t tried. This technique adds one extra narrow column to an otherwise conventional column grid. In print design, this narrow column is often referred to as a “bastard column” or “bastard measure” and describes a block of content which doesn’t conform to the rest of a grid. (This being a family-friendly publication, I’ll call it a “skinny column.”)

In these first examples, squeezing an image into one column reduces its visual weight and upsets the balance of my composition. Making the image fill two standard columns also upsets that delicate balance.

Splitting the final column, then adding half its width to another, creates the perfect space for my image, and a more pleasing overall result.

Example using a skinny column

(Large preview)

I might use a skinny column to inform the width of design elements. This Mini Cooper logo matches the width of my skinny column, and its size feels balanced with the rest of my composition.

Example using a skinny column

(Large preview)

Sometimes, content won’t fit into a single column comfortably, but by combining the widths of standard and skinny columns, I create more space and a better measure for running text. I can place a skinny column anywhere within a layout, wherever I need it for my content.

Example using a skinny column

(Large preview)

An empty skinny column adds whitespace which allows the eye to roam around a design. The asymmetry created by placing a skinny column between two standard columns also makes a structured layout feel more dynamic and energetic.

Example using a skinny column

(Large preview)

Another empty skinny column carves a wide gutter into this design and confines my running text to a single column, so that its height reflects the image’s vertical format. I can also use a skinny column to turn typographic elements into exciting design elements.

Developing With Skinny Columns

Designs like these are surprisingly simple to implement using today’s CSS. I need just four structural elements: a logo, a header, a figure, plus an article to contain my running text:

<body>
<img src="logo.svg" alt="Mini Cooper"> 
<header>…</header> 
<figure>…</figure> 
<article>…</article>
</body>

I start with a design for medium size screens by applying CSS Grid to the body element within my first media query. At this size, I don’t need a skinny column, so instead I develop a symmetrical three-column grid which expands evenly to fill the viewport width:

@media screen and (min-width : 48em) { 
body {
display: grid;
grid-template-columns: 1fr 1fr 1fr; 
grid-column-gap: 2vw; }
}

I use line numbers to place items into my grid columns and rows. I place my logo into the skinny first column and the first row using a substring matching attribute selector. This targets an image with “logo” anywhere in its source value:

[src*="logo"] {
grid-column: 1;
grid-row: 1; }

Next, I place the header element — containing my headline and standfirst paragraph — into the second row. Using line numbers from 1 to -1 spreads this header across all columns:

header {
grid-column: 1 / -1;
grid-row: 2; }

With display:grid; applied, all direct descendants of a grid-container become grid-items, which I can place using areas, line numbers, or names.

This design includes a figure element with a large Mini image, plus caption text about the original Cooper model design. This figure is crucial because it describes a relationship between the img and figcaption. However, while this figure makes my markup more meaningful, I lose the ability to place the img and figcaption using CSS Grid, because neither is a direct descendant of the body where I defined my grid.

Luckily, there’s a CSS property which — when used thoughtfully — can overcome this problem. I don’t need to style the figure, I only need to style its img and figcaption. By applying display:contents; to my figure, I effectively remove it from the DOM for styling purposes, so its descendants take its place:

figure {
display: contents; }

It’s worth noting that although display: contents; effectively removes my figure from the DOM for styling purposes, any properties which inherit — including font sizes and styles — are still inherited:

figure img { 
grid-column: 2/-1; grid-row: 4; }

figcaption {
grid-column: 1;
grid-row: 4;
align-self: end; }

I place my article and style its text over three columns using Multi-column Layout, one of my favourite CSS properties:

article {
grid-column: 1 / -1;
grid-row: 3;
column-count: 3;
column-gap: 2vw; }

It’s time to implement a design which includes a skinny column for larger screens. I use a media query to isolate these new styles, then I create a five-column grid which starts with a 1fr wide skinny column:

@media screen and (min-width : 64em) {
body {
grid-template-columns: 1fr repeat(4, 2fr); } 
}

Example using a skinny column

(Large preview)

Then I add values to reposition the header, img and figcaption, and article, remembering to reset the column-count to match its new width:

header { 
grid-column: 3/-1; 
grid-row: 1; }

figure img {
grid-column: 4 / -1;
grid-row: 2; }

figcaption {
grid-column: 2;
grid-row: 2;
align-self: start; }

article {
grid-column: 3;
grid-row: 2;
column-count: 1; }

Radically changing how a design looks using CSS alone, without making changes to the structure of the HTML, makes me smile, even after almost two decades. I also smile when I change the composition to create a dramatically different layout without changing the grid. For this alternative design, I’m aiming for a more structured look.

To improve the readability of my running text, I divide it into three columns. Then, to separate this block of content from other text elements, I place my skinny column between the first two standard columns. There’s no need to change the structure of my HTML. All I need are minor changes to grid values in my stylesheet. This time, my 1fr wide skinny column value comes between two standard column widths:

@media screen and (min-width : 64em) {
body {
grid-template-columns: 2fr 1fr 2fr 2fr 2fr; } 
}

I place my header in the second row and the article in a row directly underneath:

header {
grid-column: 3 / -1;
grid-row: 2; }

article {
grid-column: 3 / -1;
grid-row: 3;
column-count: 3;
column-gap: 2vw; }

Because neither the img nor the figcaption is a direct descendant of the body element where I defined my grid, I need the display: contents; property again to place them:

figure {
display: contents; }

figure img { 
grid-column: 3/5; 
grid-row: 1; }

figcaption { 
grid-column: 5/-1; 
grid-row: 1; 
align-self: start; }

Depending on how you use them, introducing skinny columns into your designs can make content more readable or transform a static layout into one which feels dynamic and energetic.

Skinny columns are just one example of learning a technique from print and using it to improve a design for the web. I was itching to try out skinny columns and I haven’t been disappointed.

Example using a skinny column

(Large preview)

Adding a skinny column to your designs can often be an inspired decision. It gives you extra flexibility and can transform an otherwise static layout into one which is filled with energy.

Designing Modular Grids

Avaunt magazine contains plenty of inspiring layouts, but I want to focus on two pages in particular. This spread contains ‘Cold War Design’ objects within a design which could be applied to a wide variety of content types.

Avaunt magazine spread

‘Cold War Design’ from Avaunt magazine issue 7, January 2019. (Large preview)

At first glance, modular grids can look complicated; however, they’re easy to work with. It surprises me that so few web designers use them. I want to change that.

When you use modular grids thoughtfully, they can fill your designs with energy. They’re excellent for bringing order to large amounts of varied content, but can also create visually appealing layouts when there’s very little content.

For this design — inspired by Avaunt — I base my modular grid on six symmetrical columns and four evenly spaced rows. Grid modules define the placement and size of my content.

Example using a modular grid

Modular grid design inspired by Avaunt. (Large preview)

I bind several modules together to create spacial zones for large images and running text in a single column on the left. Boundary lines help emphasise the visual structure of the page.

This type of layout might seem complicated at first, but in reality, it’s beautifully simple to implement. Instead of basing my markup on the visual layout, I start by using the most appropriate elements to describe my content. In the past, I would’ve used table elements to implement modular grids. Fast forward a few years, and divisions replaced those table cells. Incredibly, now I need just two structural HTML elements to accomplish this design; one article followed by an ordered list:

<body> 
<article>…</article> 
<ol>…</ol>
</body>

That article contains a headline, paragraphs, and a table for tabular information:

<article>
<p class="standfirst">…</p> 
<h1>…</h1>
<p>…</p>
<table>…</table>
</article>

The modular grid of Mini blueprints is the most complicated visual aspect of my design, but the markup which describes it is simple. The blueprints are in date order, so an ordered list seems the most appropriate HTML element:

<ol class="items">
<li>
<h2>1969</h2>
<img src="front.svg" alt="">
</li> 
<li>
<h2>1969</h2>
<img src="back.svg" alt="">
</li>
</ol>

My HTML weighs in at only 2Kb and is under sixty lines. It’s a good idea to validate that markup as a little time spent validating early will save plenty more time on debugging CSS later. I also open the unstyled page in a browser as I always need to ensure my content’s accessible without a stylesheet.

I begin developing a symmetrical three-column layout for medium size screens by isolating grid styles using a media query:

@media screen and (min-width : 48em) { 
body {
display: grid;
grid-template-columns: 1fr 1fr 1fr; 
grid-column-gap: 2vw; }
}

The article and ordered list are the only direct descendants of body, but it’s what they contain which I want to place as grid-items. I use display:contents; to enable me to place their contents anywhere on my grid:

article, ol {
display: contents; }

Every element in my article should span all three columns, so I place them using line numbers, starting with the first line (1), and ending with the final line (-1):

.standfirst,
section,
table {
grid-column: 1 / -1; }

The items in my ordered list are evenly placed into the three-column grid. However, my design demands that some items span two columns and one spans two rows. nth-of-type selectors are perfect tools for targeting elements without resorting to adding classes to my markup. I use them with the span keyword and the number of columns or rows I want the items to span:

li:nth-of-type(4), 
li:nth-of-type(5), 
li:nth-of-type(6), 
li:nth-of-type(14) { 
grid-column: span 2; }

li:nth-of-type(6) {
grid-row: span 2; }

Example using a modular grid

When elements don’t fit into the available space in a grid, the CSS Grid placement algorithm leaves a module empty. (Large preview)

Previewing my design in a browser, I can see it’s not appearing precisely as planned because some grid modules are empty. By default, the normal flow of any document arranges elements from left to right and top to bottom, just like western languages. When elements don’t fit the space available in a grid, the CSS Grid placement algorithm leaves spaces empty and places elements on the following line.

Example using a modular grid

A browser fills any empty module with the next element which can fit into that space without altering the source order. This may have implications for accessibility. (Large preview)

I can override the algorithm’s default by applying the grid-auto-flow property and a value of dense to my grid-container, in this case the body:

body {
grid-auto-flow: dense; }

Row is the default grid-auto-flow value, but you can also choose column, column dense, and row dense. Use grid-auto-flow wisely because a browser fills any empty module with the next element in the document flow which can fit into that space. This behaviour changes the visual order without altering the source which may have implications for accessibility.

My medium size design now looks just like I’d planned, so now it’s time to adapt it for larger screens. I need to make only minor changes to the grid styles to first turn my article into a sidebar — which spans the full height of my layout — then place specific list items into modules on a larger six-column grid:

@media screen and (min-width : 64em) {
body {
grid-template-columns: 1fr 1fr 1fr 1fr 1fr 1fr; }

article {
grid-column: 1;
grid-row: 1 / -1; }

li:nth-of-type(4) {
grid-column: 5 / span 2; }

li:nth-of-type(5) {
grid-column: 2 / span 2; }

li:nth-of-type(6) {
grid-column: 4 / span 2;
grid-row: 2 / span 2; }

li:nth-of-type(14) { 
grid-column: 5 / 7; } 
}

Keeping track of grid placement lines can sometimes be difficult, but fortunately, CSS Grid offers more than one way to accomplish a modular grid design. Using grid-template-areas is an alternative approach and one I feel doesn’t get enough attention.

Using grid-template-areas, I first define each module by giving it a name, then place elements into either one module or several adjacent modules, known as spatial zones. This process may sound complicated, but it’s actually one of the easiest and most obvious ways to use CSS Grid.

I give each element a grid-area value to place it in my grid, starting with the logo and article:

[src*="logo"] {
grid-area: logo; }

article {
grid-area: article; }

Next, I assign a grid-area to each of my list items. I chose a simple i-plus-number scheme (i1, i2, and so on), but could choose any value, including words or even letters like a, b, c, or d.

li:nth-of-type(1) {
grid-area: i1; }

…

li:nth-of-type(14) { 
grid-area: i14; }

My grid has six explicit columns and four implicit rows, whose heights are defined by the height of the content inside them. I draw my grid in CSS using the grid-template-areas property, where each period (.) represents a grid module:

body {
grid-template-columns: repeat(6, 1fr); 
grid-template-areas:
". . . . . ."
". . . . . ."
". . . . . ."
". . . . . ."; }

Then, I place elements into that grid using the grid-area values I defined earlier. If I repeat values across multiple adjacent modules — across either columns or rows — that element expands to create a spatial zone. Leaving a period (.) creates an empty module:

body {
grid-template-columns: repeat(6, 1fr); 
grid-template-areas:
"article logo i1 i2 i3 i3"
"article i4 i4 i5 i5 ."
"article i7 i8 i5 i5 ."
"article i10 i11 i12 i14 i14"; }

In every example so far, I isolated layout styles for different screen sizes using media query breakpoints. This technique has become the standard way of dealing with the complexities of responsive web design. However, there’s a technique for developing responsive modular grids without using media queries. This technique takes advantage of a browser’s ability to reflow content.

Example using a modular grid

I need to make only minor changes to my grid styles to adapt this layout for larger screens. The sidebar article on the left and each list item are placed into my grid. (Large preview)

Before going further, it’s worth remembering that my design requires just two structural HTML elements; one ordered list for the content and an article which I transform into a sidebar when there’s enough width available for both elements to stand side-by-side. When there’s insufficient width, those elements stack vertically in the order of the content:

<body> 
<article>…</article> 
<ol>…</ol>
</body>

The ordered list forms the most important part of my design and should always occupy a minimum of 60% of a viewport’s width. To ensure this happens, I use a min-width declaration:

ol {
min-width: 60%; }

While I normally recommend using CSS Grid for overall page layout and Flexbox for flexible components, to implement this design I turn that advice on its head.

I make the body element into a flex-container, then ensure my article grows to fill all the horizontal space by using the flex-grow property with a value of 1:

body {
display: flex; }

article {
flex-grow: 1; }

To make sure my ordered list also occupies all available space whenever the two elements stand side-by-side, I give it a ridiculously high flex-grow value of 999:

ol {
flex-grow: 999; }

Using flex-basis provides an ideal starting width for the article. By setting the flex container’s wrapping to wrap, I ensure the two elements stack when the list’s minimum width is reached, and there’s insufficient space for them to stand side-by-side:

body {
flex-wrap: wrap; }

article {
flex-basis: 20rem; }

I want to create a flexible modular grid which allows for any number of modules. Rather than specifying the number of columns or rows, using repeat allows a browser to create as many modules as it needs. auto-fill fills all available space, wrapping the content where necessary. minmax gives each module a minimum and maximum width, in this case 10rem and 1fr:

ol {
display: grid;
grid-template-columns: repeat(auto-fill, minmax(10rem, 1fr));
grid-column-gap: 2vw; }

To avoid modules staying empty, I use grid-auto-flow again with a value of dense. The browser’s algorithm then reflows my content to fill any empty modules:

ol {
grid-auto-flow: dense; }

Just like before, some list items span two columns, and one spans two rows. Again, I use nth-of-type selectors to target specific list items, then grid-column or grid-row with the span keyword followed by the number of columns or rows I want to span:

li:nth-of-type(4), 
li:nth-of-type(5), 
li:nth-of-type(14) { 
grid-column: span 2; }

li:nth-of-type(6) {
grid-column: span 2;
grid-row: span 2; }

This simple CSS creates a responsive design which adapts to its environment without needing multiple media queries or separate sets of styles.
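Putting the pieces from this section together, the whole media-query-free layout can be sketched roughly like this (the 60%, 20rem, and 10rem figures are the ones chosen above):

```css
/* The body lays out the sidebar and list side by side, wrapping when space runs out */
body {
  display: flex;
  flex-wrap: wrap;
}

/* The article starts at 20rem and grows modestly */
article {
  flex-basis: 20rem;
  flex-grow: 1;
}

/* The list always claims at least 60% of the width and grows far faster */
ol {
  min-width: 60%;
  flex-grow: 999;
  /* Inside the list, a grid creates as many 10rem+ columns as fit */
  display: grid;
  grid-template-columns: repeat(auto-fill, minmax(10rem, 1fr));
  grid-column-gap: 2vw;
  grid-auto-flow: dense;
}
```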

Example using a modular grid

(Large preview)

By using modern CSS, including Grid and Flexbox, relying on browsers’ ability to reflow content, plus a few smart choices about minimum and maximum sizes, this approach comes closest to achieving the goals of a truly responsive web.

Smashing Editorial
(ra, yk, il)

Source: Smashing Magazine, Inspired Design Decisions: Avaunt Magazine

Bringing A Healthy Code Review Mindset To Your Team


Sandrina Pereira



A ‘code review’ is a moment in the development process in which you (as a developer) and your colleagues work together and look for bugs within a recent piece of code before it gets released. In such a moment, you can be either the code author or one of the reviewers.

When doing a code review, you might not be sure of what you are looking for. On the other side, when submitting one, you might not know what to expect. This lack of empathy and wrong expectations between the two sides can trigger unfortunate situations and rush the process until it ends in an unpleasant experience for both sides.

In this article, I’ll share how this outcome can be changed by changing your mindset during a code review:

👩‍💻👨‍💻 Working As A Team

Foster A Culture Of Collaboration

Before we start, it’s fundamental to understand why code needs to be reviewed. Knowledge sharing and team cohesion are beneficial to everyone; however, if done with a poor mindset, a code review can become a huge time sink with unpleasant outcomes.

The team attitude and behavior should embrace the value of a nonjudgmental collaboration, with the common goal of learning and sharing — regardless of someone else’s experience.

Include Code Reviews In Your Estimations

A complete code review takes time. If a change took one week to be made, don’t expect the code review to take less than a day. It just doesn’t work like that. Don’t try to rush a code review nor look at it as a bottleneck.

Code reviews are as important as writing the actual code. As a team, remember to include code reviews in your workflow and set expectations about how long a code review might take, so everyone is synced and confident about their work.

Save Time With Guidelines And Automation

An effective way to guarantee consistent contributions is to integrate a Pull Request template into the project. This will help the author to submit a healthy PR with a complete description. A PR description should explain the change’s purpose, the reason behind it, and how to reproduce it. Screenshots and reference links (Git issue, design file, and so on) are the final touches for a self-explanatory submission.
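Such a template might, for example, cover the points just mentioned (the exact fields are an assumption, not a prescribed standard):

```markdown
## What does this PR do?
A short summary of the change.

## Why is it needed?
The reason or issue behind the change.

## How to reproduce / test
Steps a reviewer can follow to verify it.

## References
Screenshots, Git issue, design file, etc.
```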

Doing all of this will prevent early comments from reviewers asking for more details. Another way of making code reviews seem less nitpicky is to use linters to find code formatting and code-quality issues before a reviewer even gets involved. Whenever you see a repetitive comment during the review process, look for ways to minimize it (be it with better guidelines or more automation).

Stay A Student

Anyone can do a code review, and everyone must receive a code review — no matter the seniority level. Receive any feedback gratefully as an opportunity to learn and to share knowledge. Look at any feedback as an open discussion rather than a defensive reaction. As Ryan Holiday says:

“An amateur is defensive. The professional finds learning (and even, occasionally, being shown up) to be enjoyable; they like being challenged and humbled, and engage in education as an ongoing and endless process. (…)”

— Ryan Holiday, Ego Is the Enemy

Stay humble because the second you stop being a student, your knowledge becomes fragile.

The Art Of Selecting Reviewers

In my opinion, picking the reviewers is one of the most important decisions for an effective and healthy code review as a team.

Let’s say your colleague is submitting a code review and decides to tag “everyone” because, unconsciously, she/he might want to speed up the process, deliver the best code possible, or make sure everyone knows about the code change. Each of the reviewers receives a notification, then opens the PR link and sees a lot of people tagged as reviewers. The thought of “someone else will do it” pops up in their minds, leading them to ignore the code review and close the link.

Since nobody has started the review yet, your colleague will remind each of the reviewers to do it, creating pressure on them. Once the reviewers start, the review process takes forever because everyone has their own opinion and it’s a nightmare to reach a common agreement. Meanwhile, whoever decided not to review the code is spammed with zillions of e-mail notifications with all of the review comments, creating noise in their productivity.

This is something I see happening more than I’d like: Pull Requests with a bunch of people tagged as reviewers and ending, ironically, in a non-productive code review.

There are some common, effective flows for selecting reviewers: One possibility is to pick two to three colleagues who are familiar with the code and ask them to be reviewers. Another approach, explained by the GitLab team, is to have a chained review flow in which the author picks one reviewer, who picks another reviewer, until at least two reviewers agree with the code. Regardless of the flow you choose, avoid having too many reviewers. Being able to trust your colleagues’ judgment of the code is the key to conducting an effective and healthy code review.

Have Empathy

Spotting pieces of code to improve is just a part of a successful code review. Just as important is to communicate that feedback in a healthy way by showing empathy towards your colleagues.

Before writing a comment, remember to put yourself in the other person’s shoes. It’s easy to be misunderstood when writing, so review your own words before sending them. Even if a conversation starts to turn ugly, don’t let it affect you — always stay respectful. Speaking well to others is never a bad decision.

Know How To Compromise

When a discussion isn’t solved quickly, take it to a personal call or chat. Analyze together if it’s a subject worth paralyzing the current change request or if it can be addressed in another one.

Be flexible but pragmatic and know how to balance efficiency (delivering) and effectiveness (quality). It’s a compromise to be made as a team. In these moments I like to think of a code review as an iteration rather than a final solution. There’s always room for improvement in the next round.

In-Person Code Reviews

Gathering the author and the reviewer together in a pair programming style can be highly effective. Personally, I prefer this approach when the code review involves complex changes or there’s an opportunity for a large amount of knowledge sharing. Despite this being an offline code review, it’s a good habit to leave online comments about the discussions taken, especially when changes need to be made after the meeting. This is also useful to keep other online reviewers up to date.

Learn From Code Review Outcomes

When a code review has gone through a lot of changes, taken too long, or sparked too many discussions, gather your team together to analyze the causes and which actions can be taken. When a code review is complex, splitting a future task into smaller tasks will make each code review easier.

When the experience gap is big, adopting pair programming is a strategy with incredible results — not only for knowledge sharing but also for off-line collaboration and communication. Whatever the outcome, always aim for a healthy dynamic team with clear expectations.

📝 As An Author

One of the main concerns when working on a code review as an author is to minimize the reviewer’s surprise when reading the code for the first time. That’s the first step to a predictable and smooth code review. Here’s how you can do it.

Establish Early Communication

It’s never a bad idea to talk with your future reviewers before writing too much code. Whether it’s an internal or external contribution, you could do a refinement together or a little bit of pair programming at the beginning of development to discuss solutions.

There’s nothing wrong with asking for help as a first step. In fact, working together outside the review is an important first step to prevent early mistakes and guarantee an initial agreement. At the same time, your reviewers become aware of a future code review to be made.

Follow The Guidelines

When submitting a Pull Request to be reviewed, remember to add a description and to follow the guidelines. This will save the reviewers from spending time to understand the context of the new code. Even if your reviewers already know what it is about, you can also take this opportunity as a way to improve your writing and communication skills.

Be Your First Reviewer

Seeing your own code in a different context allows you to find things you would miss in your code editor. Do a code review of your own code before asking your colleagues. Have a reviewer mindset and really go through every line of code.

Personally, I like to annotate my own code reviews to better guide my reviewers. The goal here is to prevent comments related to a possible lack of attention and to make sure you followed the contribution guidelines. Aim to submit a code review just as you would like to receive one.

Be Patient

After submitting a code review, don’t jump right into a new private message asking your reviewers to “take a look, it only takes a few minutes” and indirectly craving for that thumbs-up emoji. Trying to rush your colleagues to do their work is not a healthy mindset. Instead, trust your colleagues’ workflow as they trust you to submit a good code review. Meanwhile, wait for them to get back to you when they are available. Don’t look at your reviewers as a bottleneck. Remember to be patient because hard things take time.

Be A Listener

Once a code review is submitted, comments will come, questions will be asked, and changes will be proposed. The golden rule here is to not take any feedback as a personal attack. Remember that any code can benefit from an outside perspective.

Don’t look at your reviewers as an enemy. Instead, take this moment to set aside your ego, accept that you make mistakes, and be open to learning from your colleagues, so that you can do a better job the next time.

👀 As A Reviewer

Plan Ahead Of Your Time

When you are asked to be a reviewer, don’t interrupt your current task right away. That’s a common mistake I see all the time. Reviewing code demands your full attention, and each time you switch code contexts, you decrease your productivity by wasting time recalibrating your focus. Instead, plan ahead by allocating time slots of your day to do code reviews.

Personally, I prefer to do code reviews first thing in the morning or after lunch before picking any other of my tasks. That’s what works best for me because my brain is still fresh without a previous code context to switch off from. Besides that, once I’m done with the review, I can focus on my own tasks, while the author can reevaluate the code based on the feedback.

Be Supportive

When a Pull Request doesn’t follow the contribution guidelines, be supportive — especially to newcomers. Take that moment as an opportunity to guide the author to improve his/her contribution. Meanwhile, try to understand why the author didn’t follow the guidelines in the first place. Perhaps there’s room for improvement that you were not aware of before.

Check Out The Branch And Run It

While reviewing the code, make it run on your own computer — especially when it involves user interfaces. This habit will help you to better understand the new code and spot things you might miss if you just used a default diff-tool in the browser which limits your review scope to the changed code (without having the full context as in your code editor).

Ask Before Assuming

When you don’t understand something, don’t be afraid to say it and ask questions. When asking, remember to first read the surrounding code and avoid making assumptions.

Most questions fit into one of two categories:

  1. “How” Questions
    When you don’t understand how something works or what it does, evaluate with the author whether the code is clear enough. Don’t mistake simple code for ignorance. There’s a difference between code that is hard to read and code that you are not aware of. Be open to learning and using a new language feature, even if you don’t know it deeply yet. However, use it only if it simplifies the codebase.
  2. “Why” Questions
    When you don’t understand the “why”, don’t be afraid of suggesting a comment in the code, especially if it’s an edge case or a bug fix. The code should be self-explanatory when it comes to what it does. Comments are a complement that explains the why behind a certain approach. Explaining the context is highly valuable when it comes to maintainability, and it will save someone from deleting a block of code they thought was useless. (Personally, I like to comment my code until I feel safe about forgetting it later.)

Constructive Criticism

Once you find a piece of code you believe should be improved, always remember to recognize the author’s effort in contributing to the project, and express yourself in a receptive and transparent way.

  • Open discussions.
    Avoid formalizing your feedback as a command or accusation such as “You should…” or “You forgot…”. Express your feedback as an open discussion instead of a mandatory request. Remember to comment on the code, not the author. If the comment is not about the code, then it doesn’t belong in a code review. As said before, be supportive. Using a passive voice, talking in the plural, and expressing questions or suggestions are good approaches to emphasize empathy with the author.

Instead of “Extract this method from here…” prefer “This method should be extracted…” or “What do you think of extracting this method…”

  • Be clear.
    Feedback can easily be misinterpreted, especially when it comes to expressing an opinion versus sharing a fact or a guideline. Always make that clear right away in your comment.

Unclear: “This code should be….”

Opinion: “I believe this code should be…”

Fact: “Following [our guidelines], this code should be…”.

  • Explain why.
    What’s obvious to you might not be obvious to others. It never hurts to explain the motivation behind your feedback, so the author not only understands how to change something but also what the benefit is.

Opinion: “I believe this code could be…”

Explained: “I believe this code could be (…) because this will improve readability and simplify the unit tests.”

  • Provide examples.
    When sharing a code feature or a pattern which the author is not familiar with, complement your suggestion with a reference as guidance. When possible, go beyond MDN docs and share a snippet or a working example adapted to the current code scenario. When writing an example is too complex, be supportive and offer to help in person or by a video call.

Besides saying something such as “Using filter will help us to [motivation],” also say, “In this case, it could be something like: [snippet of code]. You can check [an example at Finder.js]. Any doubt, feel free to ping me on Slack.”

  • Be reasonable.
    If the same error is made multiple times, prefer to leave just one comment about it and remind the author to review it in the other places, too. Adding redundant comments only creates noise and might be counterproductive.

Keep The Focus

Avoid proposing code changes unrelated to the current code. Before suggesting a change, ask yourself if it’s strictly necessary at that moment. This type of feedback is especially common in refactors. It’s one of the many trade-offs between efficiency and effectiveness that we need to make as a team.

When it comes to refactors, personally, I tend to prefer small but effective improvements. Those are easier to review and there’s less chance of having code conflicts when updating the branch with the target branch.

Set Expectations

If you leave a code review half-done, let the author know about it, so time expectations are under control. In the end, also let the author know if you agree with the new code or if you would like to re-review it once again later.

Before approving a code review, ask yourself if you are comfortable about the possibility of touching that code in the future. If yes, that’s a sign you did a successful code review!

Learn To Refuse A Code Review

Although nobody admits it, sometimes you have to refuse a code review. The moment you decide to accept a code review but try to rush it, the project’s quality is compromised, as well as your team’s mindset.

When you accept to review someone else’s code, that person is trusting your capabilities — it’s a commitment. If you don’t have the time to be a reviewer, just say no to your colleague(s). I really mean it; don’t let your colleagues wait for a code review that will never be done. Remember to communicate and keep expectations clear. Be honest with your team and — even better — with yourself. Whatever your choice, do it healthily, and do it right.

Conclusion

Given enough time and experience, doing code reviews will teach you much more than just technical knowledge. You’ll learn to give and receive feedback from others, as well as to make decisions with more thought put into them.

Each code review is an opportunity to grow as a community and individually. Even outside code reviews, remember to foster a healthy mindset until the day it comes naturally to you and your colleagues. Because sharing is what makes us better!


Smashing Editorial
(ra, yk, il)

Source: Smashing Magazine, Bringing A Healthy Code Review Mindset To Your Team

UX Optimizations For Keyboard-Only And Assistive Technology Users


Aaron Pearlman



(This is a sponsored article.) One of the cool things about accessibility is that it forces you to see and think about your application beyond the typical sighted, mouse-based user experience. Users who navigate via keyboard only (KO) and/or assistive technology (AT) are heavily dependent not only on your application’s information architecture being thoughtful, but the affordances your application makes for keeping the experience as straightforward as possible for all types of users.

In this article, we’re going to go over a few of those affordances that can make your KO/AT user experiences better without really changing the experience for anyone else.

Additions To Your Application’s UX

These are features that you can add to your application to improve the UX for KO/AT users.

Skip Links

A skip link is a navigation feature that invisibly sits at the top of websites or applications. When present, it is evoked and becomes visible on your application’s first tab stop.

A skip link allows your user to “skip” to various sections of interest within the application without having to tab-cycle to them. The skip link can contain multiple links if your application has multiple areas of interest you feel your users should have quick access to from your application’s point of entry.

For KO/AT users, this is a helpful tool that both allows them to rapidly traverse your app and helps orient them to your application’s information architecture. All other users will likely never even know this feature exists.
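One common implementation technique positions the link off-screen and reveals it only on keyboard focus — a minimal CSS sketch (the class name is hypothetical, not Deque’s actual code):

```css
/* Visually hidden until it receives keyboard focus */
.skip-link {
  position: absolute;
  top: 0;
  left: 0;
  padding: 0.5em 1em;
  transform: translateY(-100%);
}

/* The first Tab stop brings it into view */
.skip-link:focus {
  transform: translateY(0);
}
```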

Here’s an example of how we handle skip links. After you click the link, hit Tab ⇥ and look in the upper-left corner. The skip link has two links: Main Content and Code Samples. You can use Tab ⇥ to move between them and hit Enter to navigate to the link.

Screengrab of the skip link navigation menu from Deque’s accessible pattern library with clear focus indication and the option to navigate directly to the main content or code samples

Shortcuts/Hotkey Menus

This is a feature that I think everyone is familiar with: shortcuts and hotkeys. You’ve likely used them from time to time, they are very popular amongst power users of an application, and come in a variety of incarnations.

For KO/AT users, shortcuts/hotkeys are invaluable. They allow them to use the applications, as intended, without having to visually target anything or tab through the application to get to an element or content. While frequent actions and content are always appreciated when represented in a shortcut/hotkey menu, you may also want to consider some slightly less frequent actions that may be buried in your UI (for good reason) but are still something that a user would want to be able to access.

Making shortcuts for those functions will be extremely helpful to KO/AT users. You can make the command a bit more involved, such as using three keystrokes to evoke it, to imply it’s a less frequently used piece of functionality. If you have a shortcut/hotkey menu, make sure to promote it in your application so your users, especially your KO/AT users, can find it and use it effectively.

User Education

User education refers to a piece of functionality that directs the users on what to do, where to go, or what to expect. Tooltips, point outs, info bubbles, etc. are all examples of user education.

One thing you should ask yourself when designing, placing, and/or writing copy for your user education is:

“If I couldn’t see this, would it still be valuable to understand ______?”

Many times, just reorienting the user education through that lens can lead to a much better experience for everyone. For example, rather than saying “Next, click on the button below,” you may want to write, “To get started, click the START button.” The second method removes the visual orientation and instead focuses on the common information both a sighted and a KO/AT user would have at their disposal.

Note: It’s perfectly OK to use user education features, like point outs, to visually point out things in the application; just make sure the companion text lets your KO/AT users understand the same things that are referred to visually.

See the Pen ftpo demo by Harris Schneiderman.

See on Codepen

Augmentations To Your Application’s UX

There are changes or tweaks you can make to common components/features to improve the UX for KO/AT users.

Modals

Now we’re getting into the nitty-gritty. One of the great things about accessibility is how it opens the doorway to novel ways of solving problems you may not have considered before. You can make something fully WCAG 2.0 AA accessible and solve the problem with very different approaches. For modals, we at Deque came up with an interesting approach that would be totally invisible to most sighted users but would be noticed by KO/AT users almost immediately.

For a modal to be accessible it needs to announce itself when evoked. Two common ways to do this are: focus the modal’s body after the modal is open or focus the modal’s header (if it has one) after the modal is open. You do this so the user’s AT can read out the modal’s intent like “Edit profile” or “Create new subscription”.

After you focus the body or header, hitting Tab ⇥ will send focus to the next focusable element in the modal — commonly a field or, if it’s in the header, sometimes it’s the close button (X). Continuing to tab will move you through all the focusable elements in the modal, typically finishing with terminal buttons like SAVE and/or CANCEL.

Now we get to the interesting part. After you focus the final element in the modal, hitting Tab ⇥ again will “cycle” you back to the first tab stop, which in the case of the modal will be either the body or the header, because that’s where we started. However, in our modals we “skip” that initial tab stop and take you to the second stop (the close button (X) in the upper corner). We do this because the modal doesn’t need to keep announcing itself over and over each cycle. It only needs to do so on the initial evocation, not on any subsequent trips through, so we have a special programmatic stop which skips itself after the first time.

This is a small (but appreciated) usability improvement we came up with exclusively for KO/AT users which would be completely unknown to everyone else.

See the Pen modal demo by Harris Schneiderman.

See on Codepen

Navigation Menus

Navigation menus are tricky. They can be structured in a multitude of ways, tiered, nested, and with countless mechanisms of evocation, disclosure, and traversal. This makes it important to consider how they are interacted with and represented for KO/AT users during the design phase. Good menus should be “entered” and “exited”, meaning you tab into a menu to use it and tab out of it to exit it (if you don’t use it).

This idea is best illustrated with a literal example, so let’s take a look at our 2-tier, vertical navigation from Cauldron.

  1. Go to https://pattern-library.dequelabs.com/;
  2. Hit Tab ⇥ three times. The first tab stop is the skip link (which we went over previously), the second is the logo, which acts as a “return to home” link, and the third tab enters the menu;
  3. Now that you are in the menu, use the arrow keys (↓ and ↑) to move through and open sections of the menu;
  4. Hitting Tab ⇥ at any point will exit you from the menu and send you to the content of the page.

Screengrab of the navigation menu from Deque’s accessible pattern library with the Design Principles navigation item highlighted and expanded to show its subpages: Colors and Typography

Navigation menus can also work in conjunction with some of the previous topics such as shortcut/hotkey menus to make using the menu even more efficient.

Logical Focus Retention (i.e. Deleting A Row, Returning To A Page)

Focus retention is very important. Most people are familiar, at least in concept, with focusing elements in their logical intended order on the page; however, it can get murky when an element or content changes, appears, or disappears.

  • Where does focus go when the field you are on is deleted?
  • What about when you’re sent to another tab where the application has a new context?
  • What about after a modal is closed due to a terminal action like SAVE?

For a sighted user there are visual cues which can inform them of what happened.

Here’s an example: You have an Edit Recipe modal which lets your user add and remove any ingredients. There is one ingredient field with an “Add another ingredient” button below it. (Yes, it’s styled as a link but that’s a topic for another day.) Your focus is on the button. You click the button and a new field appears between the button and the first field. Where should the focus go? Most likely your user added another ingredient to engage with it so the focus should shift from the button into the newly added field.
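Those decisions can be written down as pure functions. This is a hedged sketch of the reasoning above; the field lists and names are illustrative, not taken from the demo:

```javascript
// After adding a field, focus the newly inserted field, since the user
// most likely added it in order to type into it.
function focusAfterAdd(fields, insertedIndex) {
  return insertedIndex;
}

// After deleting the focused field, fall back to its previous sibling,
// or the first remaining field if the first one was deleted.
// `fields` is the list of fields remaining after the deletion.
function focusAfterDelete(fields, deletedIndex) {
  if (fields.length === 0) return -1; // nothing left: focus a container or heading instead
  return Math.max(0, Math.min(deletedIndex - 1, fields.length - 1));
}
```

The exact fallback target is a design choice; the point is that focus must land somewhere deliberate rather than being silently dropped to `<body>`.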

See the Pen focus retention by Harris Schneiderman.

See on Codepen

The big takeaway from all this isn’t so much the specific examples but the mentality which supports them — consider the UX for your application through the lens of the KO/AT user as well as the sighted, mouse-only user. Some of the best and most clever ideas come from the most interesting and important challenges.

If you need help ensuring that your features are accessible, all of the examples above plus countless more can be tested using Deque’s free web accessibility testing application, axe pro. You can sign up here.

Smashing Editorial
(ra, il)

Source: Smashing Magazine, UX Optimizations For Keyboard-Only And Assistive Technology Users

Styling In Modern Web Apps


Ajay Ns



If you search for how to style apps for the web, you’ll come across many different approaches and libraries, some even changing day by day. Block Element Modifier (BEM); preprocessors such as Less and SCSS; CSS-in-JS libraries, including JSS and styled-components; and, lately, design systems. You come across all of these in articles and blogs, tutorials and talks, and — of course — debates on Twitter.

How do we choose between them? Why do so many approaches exist in the first place? If you’re already comfortable with one method, why even consider moving to another?

In this article, I’m going to take a look at the tools I have used for production apps and sites I’ve worked on, comparing features from what I’ve actually encountered rather than summarizing the content from their readmes. This is my journey through BEM, SCSS, styled-components, and design systems in particular; but note that even if you use different libraries, the basic principles and approach remain the same for each of them.

CSS Back In The Day

When websites were just getting popular, CSS was primarily used to add funky designs to catch the user’s attention, such as neon billboards on a busy street:

Microsoft’s first site (left) and MTV’s site from the early 2000s. (Large preview)

Its use wasn’t for layout, sizing, or any of the basic needs we routinely use CSS for today, but as an optional add-on to make things fancy and eye-catching. As features were added to CSS, newer browsers supporting a whole new range of functionality and features appeared, and the standard for websites and user interfaces evolved — CSS became an essential part of web development.

It’s rare to find websites without a minimum of a couple hundred lines of custom styling or a CSS framework (at least, sites that don’t look generic or out of date):

Wired’s modern responsive site from InVision’s responsive web design examples post. (Large preview)

What came next is quite predictable. The complexity of user interfaces kept on increasing, along with the use of CSS; but without any suggested guidelines and with a lot of flexibility, styling became complicated and dirty. Developers had their own ways of doing things, with it all coming down to somehow getting things to look the way the design said they should.

This, in turn, led to a number of common issues that many developers faced, like managing big teams on a project, or maintaining a project over a long period of time, while having no clear guides. One of the main reasons this happens even now, sadly, is that CSS is still often dismissed as unimportant and not worth paying much attention to.

CSS Is Not Easy To Manage

There’s nothing built into CSS for maintenance and management when it comes to large projects with teams, and so the common problems faced with CSS are:

  • Lack of code structure or standards greatly reduces readability;
  • Maintainability becomes harder as project size increases;
  • Specificity issues arise because the code isn’t readable in the first place.

If you’ve worked with Bootstrap, you may have noticed that you’re sometimes unable to override the default styles, and you might have fixed this by adding !important or by increasing the specificity of your selectors. Think of a big project’s style sheets, with their large number of classes and styles applied to each element. Working with Bootstrap is fine because it has great documentation and it aims to be used as a solid framework for styling. This obviously won’t be the case for most internal style sheets, and you’ll be lost in a world of cascaded styles.

In projects, this can mean a couple of thousand lines of CSS in a single file, with comments if you’re lucky. You might also see a couple of !important declarations used to finally get certain styles to override others.

!important does not fix bad CSS. (Large preview)

You may have faced specificity issues but not understood how specificity works. Let’s take a look.


Which of the styles applied to the same element would be applied to the image on the right, assuming they both point to it?

What is the order of weight of selectors such as inline styles, IDs, classes, attributes, and elements? Okay, I made it easy there; they’re already listed in order of weight:

Start at 0; add 1,000 for a style attribute; add 100 for each id; add 10 for each attribute, class or pseudo-class; add 1 for each element name or pseudo-element.

So, for instance, taking the above example:


Do you see why the second example was the correct answer? The id selector clearly has far more weight than element selectors. This is essentially the reason why your CSS rule sometimes doesn’t seem to apply. You can read about this in detail in Vitaly Friedman’s article, “CSS Specificity: Things You Should Know”.

The larger the codebase, the greater the number of classes. These might even apply to or override different styles based on specificity, so you can see how quickly it can become difficult to deal with. Over and above this we deal with code structure and maintainability: it’s the same as the code in any language. We have atomic design, web components, templating engines; there’s a need for the same in CSS, and so we’ve got a couple of different approaches that attempt to solve these different problems.

Block Element Modifier (BEM)

“BEM is a design methodology that helps you to create reusable components and code sharing in front-end development.”

— getbem.com

The idea behind BEM is to create components out of parts of apps that are reused or are independent. Diving in, the design process is similar to atomic design: modularize things and think of each of them as a reusable component.

I chose to start out managing styles with BEM as it was very similar to the way React (which I was already familiar with) breaks apps down into reusable components.


Using BEM

BEM is nothing more than a guide: no new framework or language to learn, just CSS with a naming convention to organize things better. Following this methodology, you can implement the patterns you’ve already been using, but in a more structured manner. You can also quite easily do progressive enhancements to your existing codebase as it requires no additional tooling configuration or any other complexities.

Advantages

  • At its heart BEM manages reusable components, preventing random global styles overriding others. In the end, we have more predictable code, which solves a lot of our specificity problems.
  • There’s not much of a learning curve; it is just the same old CSS with a couple of guides to improve maintainability. It is an easy way of making code modular by using CSS itself.

Drawbacks

  • While promoting reusability and maintainability, a side effect of the BEM naming principle is that naming classes becomes difficult and time-consuming.
  • The more nested your component is in the block, the longer and more unreadable the class names become. Deeply nested or grandchild selectors often face this issue.

<div class="block__element__grandchild-element--modifier">Lorem ipsum lorem</div>
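The convention itself is mechanical enough to sketch as a tiny helper. This is a hypothetical function for illustration, not part of any BEM tooling:

```javascript
// Build a block__element--modifier class string. When a modifier is
// given, the base class is kept alongside it, as BEM recommends.
function bem(block, element, modifier) {
  const cls = element ? `${block}__${element}` : block;
  return modifier ? `${cls} ${cls}--${modifier}` : cls;
}
```

The drawback described above shows up as soon as you nest: `bem("card", "header-nav-item", "active")` already produces `"card__header-nav-item card__header-nav-item--active"`, and names only grow from there.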

That was just a quick intro to BEM and how it solves our problems. If you’d like to take a deeper look into its implementation, check out “BEM For Beginners” published here in Smashing Magazine.

Sassy CSS (SCSS)

Put simply, SCSS is CSS on steroids. Additional functionality such as variables, nested selectors, reusable mixins, and imports makes SCSS more of a programming language than plain CSS. For me, SCSS was fairly easy to pick up (go through the docs if you haven’t already), and once I got to grips with the additional features, I’d always prefer it over CSS just for the convenience it provides. SCSS is a preprocessor, meaning the compiled result is a plain old CSS file; the only thing you need to set up is tooling to compile it down to CSS in the build process.

Super handy features

  • Imports help you split style sheets into multiple files for each component/section, or whichever makes readability easier.
// main.scss
@import "_variables.scss";
@import "_mixins.scss";

@import "_global.scss";
@import "_navbar.scss";
@import "_hero.scss";
  • Mixins, loops, and variables help with the DRY principle and also make the process of writing CSS easier.
@mixin flex-center($direction) {
  display: flex;
  align-items: center;
  justify-content: center;
  flex-direction: $direction;
}

.box {
  @include flex-center(row);
}

Note: Check out more handy SCSS features over here.

  • Nesting of selectors improves readability as it works the same way HTML elements are arranged: in a nested fashion. This approach helps you recognize hierarchy at a glance.

BEM With SCSS

It is possible to group the code for components into blocks, and this greatly improves readability and makes it easier to write BEM with SCSS:

This code is relatively less heavy and complicated as opposed to nesting multiple layers. (Large preview)

Note: For further reading on how this would work, check out Victor Jeman’s “BEM with Sass” tutorial.

Styled-Components

This is one of the most widely used CSS-in-JS libraries. I’m not endorsing this particular library, but it has worked well for me, and I’ve found its features quite useful for my requirements. Take your time exploring other libraries out there and pick the one that best matches your needs.

I figured a good way to get to know styled-components was to compare the code to plain CSS. Here’s a quick look at how to use styled-components and what it’s all about:

Instead of adding classes to elements, each element with a class is made into a component. The code does look neater than the long class names we have with BEM.

Interestingly, what styled-components does under the hood is take up the job of adding relevant classes to elements as per what’s specified. This is essentially what we do in style sheets (note that it is in no way related to inline styling).

import styled from 'styled-components';
    
const Button = styled.button`
  background-color: palevioletred;
  color: papayawhip;
`;

// Output
<style>
  .dRUXBm {
    background-color: palevioletred;
    color: papayawhip;
  }
</style>

<button class="dRUXBm" />

Note: Read more about inline styling vs. CSS-in-JS in “Writing your styles in JS ≠ writing inline styles” by Max Stoiber.

Why Is Styled-Components One Of The Most Widely Used CSS-In-JS Libraries?

For styling, here we’re just using template literals with normal CSS syntax. This allows you to use the full power of JavaScript to handle the styling for you: conditionals, properties passed in as arguments (using an approach similar to React), and essentially all the functionality that can be implemented by JavaScript.

While SCSS has variables, mixins, nesting, and other features, styled-components builds on them, making them even more powerful. Its approach, based heavily on components, might seem daunting at first since it’s different from traditional CSS. But as you get used to the principles and techniques, you’ll notice that everything possible with SCSS can be done in styled-components, as well as a lot more. You just use JavaScript instead.

const getDimensions = size => {
  switch(size) {
    case 'small': return 32;
    case 'large': return 64;
    default: return 48;
  }
}

const Avatar = styled.img`
  border-radius: 50%;
  width: ${props => getDimensions(props.size)}px;
  height: ${props => getDimensions(props.size)}px;
`;

const AvatarComponent = ({ src }) => (
  <Avatar src={src} size="large" />
);

Note: Template literals in styled-components allow you to use JS directly for conditional styling and more.

Another thing to note is how perfectly this fits into the world of web components. A React component has its own functionality with JavaScript, JSX template, and now CSS-in-JS for styling: it’s fully functional all by itself and handles everything required internally.

Without us realizing it, that was the answer to a problem we’ve been talking about for too long: specificity. While BEM is a guideline that encourages a component-based structure for elements and styles while still relying on classes, styled-components imposes that structure for you.

Styled-components has a couple of additional features I’ve found especially useful: themes, which can configure a global prop (variable) passed down to each styled-component; automatic vendor prefixing; and automatic clean-up of unused code. The amazingly supportive and active community is just the icing on the cake.

Note: There’s a lot more to check out and take advantage of. Read more in the styled-components docs.

Quick Recap On Features
  • Template literals are used for syntax, the same as traditional CSS.
  • It imposes modular design.
  • It solves specificity issues by handling class names for you.
  • Everything that can be done with SCSS and more, implemented with JS.
Why It Might Not Be For You
  • It obviously relies on JavaScript, which means without it the styles don’t load, resulting in a cluttered mess.
  • Previously readable class names are replaced with hashes that really have no meaning.
  • The concept of components rather than cascading classes might be a bit hard to wrap your head around, especially since this affects the way you arrange things a lot.

Design Systems

As usage of web components increases, as well as the need for atomic design (basically breaking down UI into basic building blocks), many companies are choosing to create component libraries and design systems. Unlike the technologies or approaches on how to handle styling mentioned above, design systems represent an organizational approach to handling components and consistent design across whole platforms or apps.

It’s possible to use an approach we’ve discussed within a design system to organize styles, while the design system itself focuses on the building blocks of the apps rather than internal implementations.

“Design systems are essentially collections of rules, constraints, and principles implemented in design and code.”

Design systems and component libraries are aimed at whole ecosystems spanning different platforms and media, and can determine the overall outlook of the company itself.

“A design system is an amalgamation of style, components, and voice.”


IBM’s Carbon design system (Large preview)

Once they gain momentum, tech businesses sometimes have to scale up extremely fast, and it can be hard for the design and development teams to keep up. Different apps, new features and flows, constant reevaluation, changes, and enhancements are to be shipped as rapidly as possible when the business requires them.

Take the case of a simple modal; for instance, one screen has a confirmation modal which simply accepts a negative or positive action. This is worked on by Developer A. Then the design team ships another screen that has a modal comprising a small form with a few inputs — Developer B takes this up. Developers A and B work separately and have no idea that both of them are working on the same pattern, building two different components that are essentially the same at base level. If different developers worked on the different screens we might even see UI inconsistencies committed to the codebase.

Now imagine a large company of multiple designers and developers — you could end up with a whole collection of components and flows that are supposed to be consistent, but instead are distinct components existing independently.

The main principle of design systems is to help meet business requirements: build and ship new features, functionality, and even full apps while maintaining standards, quality, and consistency of design.

Here are some examples of popular design systems: Google’s Material Design, IBM’s Carbon, Shopify’s Polaris, and Salesforce’s Lightning Design System.

When talking about styling in particular, we’d be specifically interested in the component library part, although the design system itself is far more than just a collection of components. A component handles its functionality, template, and styling internally. A developer working on the app needn’t be aware of all the internal workings of the components, but would just need to know how to put them together within the app.

Now imagine a couple of these components that are further made reusable and maintainable, and then organized into a library. Developing apps could be almost as simple as drag-and-drop (well, not exactly, but a component could be pulled in without worrying about any internal aspects of its working). That’s essentially what a component library does.
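As a toy illustration (the names here are entirely hypothetical, not from any real design system), a component library boils down to a registry of self-contained pieces that an app developer composes without touching their internals:

```javascript
// A minimal "component library": each entry renders its own markup and
// styling hooks, and the consumer never sees the internals.
const library = {
  Button: ({ label }) => `<button class="ds-button">${label}</button>`,
  Modal: ({ title, body }) =>
    `<div class="ds-modal"><h2>${title}</h2><p>${body}</p></div>`,
};

// The app developer just composes the pieces:
const confirmModal = library.Modal({
  title: "Delete?",
  body: library.Button({ label: "Yes" }),
});
```

A real library would of course ship framework components rather than string templates, but the division of responsibility is the same: internals live in the library, composition lives in the app.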


Why You Might Want To Think About Building A Design System

  • As mentioned earlier, it helps the engineering and design teams keep up with rapidly changing business needs while maintaining standards and quality.
  • It ensures consistency of design and code throughout, helping considerably with maintainability over a long period of time.
  • One of the greatest features of design systems is that they bring the design and development teams closer, making work more of a continuous collaboration. Rather than whole pages of mock-ups given to developers to work on from scratch, components and their behaviors are well-defined first. It’s a whole different approach, but a better and faster way to develop consistent interfaces.

Why It Might Not Be The Best Fit For You

  • Design systems require a lot of time and effort up front to plan things out and organize from scratch — both code- and design-wise. Unless really required, it might not be worth delaying the development process to focus on building the design system at first.
  • If a project is relatively small, a design system can add unnecessary complexity and end up a waste of effort when all that was actually required was a couple of standards or guidelines to ensure consistency. Because several high-profile companies have adopted design systems, the hype can influence developers into thinking this is always the best approach, without analyzing the requirements and ensuring it would actually be practical.


Different appearances of a button (Large preview)

Conclusion

Styling applications is a world in itself, one not often given the importance and attention it deserves. With complex modern user interfaces, it’s only a matter of time before your app becomes a mess of unordered styles, reducing consistency and making it harder to add new code or make changes to the existing codebase.

Distilling what we’ve discussed so far: BEM, along with SCSS, could help you organize your style sheets better, take a programming approach to CSS, and create meaningful, structured class names for cleaner code with minimal configuration. Building on a front-end framework like React or Vue, you might find it convenient to hand class naming over to a CSS-in-JS library if you’re comfortable with a component-based approach, putting an end to all your specificity issues, along with a couple of other benefits. For larger applications and multiple platforms, you might even consider building a design system in combination with one of the other methods, to boost development speed while maintaining consistency.

Essentially, depending on your requirements and the size and scale of your software, it’s important to spend time determining your best approach to styling.

Smashing Editorial
(dm, og, yk, il)

Source: Smashing Magazine, Styling In Modern Web Apps

Web Accessibility In Context


Be Birchall



Haben Girma, disability rights advocate and Harvard Law’s first deafblind graduate, made the following statement in her keynote address at the AccessU digital accessibility conference last month:

“I define disability as an opportunity for innovation.”

She charmed and impressed the audience, telling us about learning sign language by touch, learning to surf, and about the keyboard-to-braille communication system that she used to take questions after her talk.

Contrast this with the perspective many of us take building apps: web accessibility is treated as an afterthought, a confusing collection of rules that the team might look into for version two. If that sounds familiar (and you’re a developer, designer or product manager), this article is for you.

I hope to shift your perspective closer to Haben Girma’s by showing how web accessibility fits into the broader areas of technology, disability, and design. We’ll see how designing for different sets of abilities leads to insight and innovation. I’ll also shed some light on how the history of browsers and HTML is intertwined with the history of assistive technology.

Assistive Technology

An accessible product is one that is usable by all, and assistive technology is a general term for devices or techniques that can aid access, typically when a disability would otherwise preclude it. For example, captions give deaf and hard of hearing people access to video, but things get more interesting when we ask what counts as a disability.

Under the ‘social model’ definition of disability adopted by the World Health Organization, a disability is not an intrinsic property of an individual, but a mismatch between the individual’s abilities and their environment. Whether something counts as a ‘disability’ or an ‘assistive technology’ doesn’t have such a clear boundary and is contextual.

Addressing mismatches between ability and environment has led not only to technological innovations but also to new understandings of how humans perceive and interact with the world.

Access + Ability, a recent exhibit at the Cooper Hewitt Smithsonian design museum in New York, showcased some recent assistive technology prototypes and products. I’d come to the museum to see a large exhibit on designing for the senses, and ended up finding that this smaller exhibit offered even more insight into the senses by its focus on cross-sensory interfaces.

Seeing is done with the brain, and not with the eyes. This is the idea behind one of the items in the exhibit, Brainport, a device for those who are blind or have low vision. Your representation of your physical environment from sight is based on interpretations your brain makes from the inputs that your eyes receive.

What if your brain received the information your eyes typically receive through another sense? A camera attached to Brainport’s headset receives visual inputs which are translated into a pixel-like grid pattern of gentle shocks perceived as “bubbles” on the wearer’s tongue. Users report being able to “see” their surroundings in their mind’s eye.

The Brainport is a camera attached to the forehead connected to a rectangular device that comes in contact with the wearer’s tongue.

Brainport turns images from a camera into a pixel-like pattern of gentle electric shocks on the tongue. (Image Credit: Cooper Hewitt) (Large preview)

Soundshirt also translates inputs typically perceived by one sense to inputs that can be perceived by another. This wearable tech is a shirt with varied sound sensors and subtle vibrations corresponding to different instruments in an orchestra, enabling a tactile enjoyment of a symphony. Also on display for interpreting sound was an empathetically designed hearing aid that looks like a piece of jewelry instead of a clunky medical device.

Designing for different sets of abilities often leads to innovations that turn out to be useful for people and settings beyond their intended usage. Curb cuts, the now familiar mini ramps on the corners of sidewalks useful to anyone wheeling anything down the sidewalk, originated from disability rights activism in the ’70s to make sidewalks wheelchair accessible. Pellegrino Turri invented the early typewriter in the early 1800s to help his blind friend write legibly, and the first commercially available typewriter, the Hansen Writing Ball, was created by the principal of Copenhagen’s Royal Institute for the Deaf-Mutes.

Vint Cerf cites his hearing loss as shaping his interest in networked electronic mail and the TCP/IP protocol he co-invented. Smartphone color contrast settings for color blind people are useful for anyone trying to read a screen in bright sunlight, and have even found an unexpected use in helping people to be less addicted to their phones.

The Hansen Writing Ball has brass colored keys arranged as if on the top half of a ball, with a curved sheet of paper resting under them.

The Hansen Writing Ball was developed by the principal of Copenhagen’s Royal Institute for the Deaf-Mutes. (Image Credit: Wikimedia Commons) (Large preview)

So, designing for different sets of abilities gives us new insights into how we perceive and interact with the environment, and leads to innovations that make for a blurry boundary between assistive technology and technology generally.

With that in mind, let’s turn to the web.

Assistive Tech And The Web

The web was intended as accessible to all from the start. A quote you’ll run into a lot if you start reading about web accessibility is:

“The power of the Web is in its universality. Access by everyone regardless of disability is an essential aspect.”

— Tim Berners-Lee, W3C Director and inventor of the World Wide Web

What sort of assistive technologies are available to perceive and interact with the web? You may have heard of or used a screen reader that reads out what’s on the screen. There are also braille displays for web pages, and alternative input devices like an eye tracker I got to try out at the Access + Ability exhibit.

It’s fascinating to learn that web pages are displayed in braille; the web pages we create may be represented in 3D! Braille displays are usually made of pins that are raised and lowered as they “translate” each small part of the page, much like the device I saw Haben Girma use to read audience questions after her AccessU keynote. A newer company, Blitab (named for “blind” + “tablet”), is creating a braille Android tablet that uses a liquid to alter the texture of its screen.

Haben Girma sits at a conference table and uses her braille reader.

Haben Girma uses her braille reader to have a conversation with AccessU conference participants. (Photo used with her permission.) (Large preview)

People proficient with using audio screen readers get used to faster speech and can adjust playback to an impressive rate (as well as saving battery life by turning off the screen). This makes the screen reader seem like an equally useful alternative mode of interacting with web sites, and indeed many people take advantage of audio web capabilities to dictate or hear content. An interface intended for some becomes more broadly used.

Web accessibility is about more than screen readers; however, we’ll focus on them here because — as we’ll see — screen readers are central to the technical challenges of an accessible web.

Recommended reading: Designing For Accessibility And Inclusion by Steven Lambert

Technical Challenges And Early Approaches

Imagine you had to design a screen reader. If you’re like me before I learned more about assistive tech, you might start by imagining an audiobook version of a web page, thinking your task is to automate reading the words on the page. But look at this page. Notice how much you use visual cues from layout and design to tell you what its parts are for and how to interact with them.

  • How would your screen reader know when the text on this page belongs to clickable links or buttons?
  • How would the screen reader determine what order to read out the text on the page?
  • How could it let the user “skim” this page to determine the titles of the main sections of this article?

The earliest screen readers were as simple as the audiobook I first imagined, as they dealt with only text-based interfaces. These “talking terminals,” developed in the mid-’80s, translated ASCII characters in the terminal’s display buffer to an audio output. But graphical user interfaces (or GUIs) soon became common. “Making the GUI Talk,” a 1991 BYTE magazine article, gives a glimpse into the state of screen readers at a moment when the new prevalence of screens with essentially visual content made screen readers a technical challenge, while the freshly passed Americans with Disabilities Act highlighted their necessity.

OutSpoken, discussed in the BYTE article, was one of the first commercially available screen readers for GUIs. OutSpoken worked by intercepting operating-system-level graphics commands to build up an offscreen model, a database representation of what is in each part of the screen. It used heuristics to interpret graphics commands, for instance, to guess that a button is drawn or that an icon is associated with nearby text. As a user moves a mouse pointer around the screen, the screen reader reads out information from the offscreen model about the part of the screen corresponding to the cursor’s location.

Graphics commands build a GUI from code. Graphics commands are also used to build a database representation of the screen, which can then be used by screen readers.

The offscreen model is a database representation of the screen based on intercepting graphics commands. (Large preview)

This early approach was difficult: intercepting low-level graphics commands is complex and operating system dependent, and relying on heuristics to interpret these commands is error-prone.

The Semantic Web And Accessibility APIs

A new approach to screen readers arose in the late ’90s, based on the idea of the semantic web. Berners-Lee wrote of his dream for a semantic web in his 1999 book Weaving the Web: The Original Design and Ultimate Destiny of the World Wide Web:

I have a dream for the Web [in which computers] become capable of analyzing all the data on the Web — the content, links, and transactions between people and computers. A “Semantic Web”, which makes this possible, has yet to emerge, but when it does, the day-to-day mechanisms of trade, bureaucracy, and our daily lives will be handled by machines talking to machines. The “intelligent agents” people have touted for ages will finally materialize.

Berners-Lee defined the semantic web as “a web of data that can be processed directly and indirectly by machines.” It’s debatable how much this dream has been realized, and many now think of it as unrealistic. However, we can see the way assistive technologies for the web work today as a part of this dream that did pan out.

Berners-Lee emphasized accessibility for the web from the start when founding the W3C, the web’s international standards group, in 1994. In a 1996 newsletter to the W3C’s Web Accessibility Initiative, he wrote:

The emergence of the World Wide Web has made it possible for individuals with appropriate computer and telecommunications equipment to interact as never before. It presents new challenges and new hopes to people with disabilities.

HTML4, developed in the late ’90s and released in 1998, emphasized separating document structure and meaning from presentational or stylistic concerns. This was based on semantic web principles, and partly motivated by improving support for accessibility. The HTML5 that we currently use builds on these ideas, and so supporting assistive technology is central to its design.

So, how exactly do browsers and HTML support screen readers today?

Many front-end developers are unaware that the browser parses the DOM to create a data structure, especially for assistive technologies. This is a tree structure known as the accessibility tree that forms the API for screen readers, meaning that we no longer rely on intercepting the rendering process as the offscreen model approach did. HTML yields one representation that the browser can use both to render on a screen, and also give to audio or braille devices.

HTML yields a DOM tree, which can be used to render a view, and to build up an accessibility tree that assistive tech like screen readers use.

Browsers use the DOM to render a view, and to create an accessibility tree for screen readers. (Large preview)

Let’s look at the accessibility API in a little more detail to see how it handles the challenges we considered above. Nodes of the accessibility tree, called “accessible objects,” correspond to a subset of DOM nodes and have attributes including role (such as button), name (such as the text on the button), and state (such as focused) inferred from the HTML markup. Screen readers then use this representation of the page.

This is how a screen reader user can know an element is a button without making use of the visual style cues that a sighted user depends on. How could a screen reader user find relevant information on a page without having to read through all of it? In a recent survey, screen reader users reported that the most common way they locate the information they are looking for on a page is via the page’s headings. If an element is marked up with an h1h6 tag, a node in the accessibility tree is created with the role heading. Screen readers have a “skip to next heading” functionality, thereby allowing a page to be skimmed.

Some HTML attributes are specifically for the accessibility tree. ARIA (Accessible Rich Internet Applications) attributes can be added to HTML tags to specify the corresponding node’s name or role. For instance, imagine our button above had an icon rather than text. Adding aria-label="sign up" to the button element would ensure that the button had a label for screen readers to represent to their users. Similarly, we can add alt attributes to image tags, thereby supplying a name to the corresponding accessible node and providing alternative text that lets screen reader users know what’s on the page.

The downside of the semantic approach is that it requires developers to use HTML tags and aria attributes in a way that matches their code’s intent. This, in turn, requires awareness among developers, and prioritization of accessibility by their teams. Lack of awareness and prioritization, rather than any technical limitation, is currently the main barrier to an accessible web.

So the current approach to assistive tech for the web is based on semantic web principles and baked into the design of browsers and HTML. Developers and their teams have to be aware of the accessibility features built into HTML to be able to take advantage of them.

Recommended reading: Accessibility APIs: A Key To Web Accessibility by Léonie Watson

AI Connections

Machine Learning (ML) and Artificial Intelligence (AI) come to mind when we read Berners-Lee’s remarks about the dream of the semantic web today. When we think of computers being intelligent agents analyzing data, we might think of this as being done via machine learning approaches. The early offscreen model approach we looked at used heuristics to classify visual information. This also feels reminiscent of machine learning approaches, except that in machine learning, heuristics to classify inputs are based on an automated analysis of previously seen inputs rather than hand-coded.

What if in the early days of figuring out how to make the web accessible we had been thinking of using machine learning? Could such technologies be useful now?

Machine learning has been used in some recent assistive technologies. Microsoft’s SeeingAI and Google’s Lookout use machine learning to classify and narrate objects seen through a smartphone camera. CTRL Labs is working on a technology that detects micro-muscle movements interpreted with machine learning techniques. In this way, it seemingly reads your mind about movement intentions and could have applications for helping with some motor impairments. AI can also be used for character recognition to read out text, and even translate sign language to text. Recent Android advances using machine learning let users augment and amplify sounds around them, and to automatically live transcribe speech.

AI can also be used to help improve the data that makes its way to the accessibility tree. Facebook introduced automatically generated alternative text to provide user images with screen reader descriptions. The results are imperfect, but point in an interesting direction. Taking this one step further, Google recently announced that Chrome will soon be able to supply automatically generated alternative text for images that the browser serves up.

What’s Next

Until (or unless) machine learning approaches become more mature, an accessible web depends on the API based on the accessibility tree. This is a robust solution, but taking advantage of the assistive tech built into browsers requires people building sites to be aware of them. Lack of awareness, rather than any technical difficulty, is currently the main challenge for web accessibility.

Key Takeaways

  • Designing for different sets of abilities can give us new insights and lead to innovations that are broadly useful.
  • The web was intended to be accessible from the start, and the history of the web is intertwined with the history of assistive tech for the web.
  • Assistive tech for the web is baked into the current design of browsers and HTML.
  • Designing assistive tech, particularly involving AI, is continuing to provide new insights and lead to innovations.
  • The main current challenge for an accessible web is awareness among developers, designers, and product managers.

Resources

Smashing Editorial
(dm, yk, il)

Source: Smashing Magazine, Web Accessibility In Context

Web Accessibility In Context

dreamt up by webguru in Uncategorized | Comments Off on Web Accessibility In Context

Web Accessibility In Context

Web Accessibility In Context

Be Birchall



Haben Girma, disability rights advocate and Harvard Law’s first deafblind graduate, made the following statement in her keynote address at the AccessU digital accessibility conference last month:

“I define disability as an opportunity for innovation.”

She charmed and impressed the audience, telling us about learning sign language by touch, learning to surf, and about the keyboard-to-braille communication system that she used to take questions after her talk.

Contrast this with the perspective many of us take building apps: web accessibility is treated as an afterthought, a confusing collection of rules that the team might look into for version two. If that sounds familiar (and you’re a developer, designer or product manager), this article is for you.

I hope to shift your perspective closer to Haben Girma’s by showing how web accessibility fits into the broader areas of technology, disability, and design. We’ll see how designing for different sets of abilities leads to insight and innovation. I’ll also shed some light on how the history of browsers and HTML is intertwined with the history of assistive technology.

Assistive Technology

An accessible product is one that is usable by all, and assistive technology is a general term for devices or techniques that can aid access, typically when a disability would otherwise preclude it. For example, captions give deaf and hard of hearing people access to video, but things get more interesting when we ask what counts as a disability.

On the ‘social model’ definition of disability adopted by the World Health Organization, a disability is not an intrinsic property of an individual, but a mismatch between the individual’s abilities and environment. On this view, whether something counts as a ‘disability’ or an ‘assistive technology’ doesn’t have a clear boundary; both are contextual.

Addressing mismatches between ability and environment has led not only to technological innovations but also to new understandings of how humans perceive and interact with the world.

Access + Ability, a recent exhibit at the Cooper Hewitt Smithsonian design museum in New York, showcased some recent assistive technology prototypes and products. I’d come to the museum to see a large exhibit on designing for the senses, and ended up finding that this smaller exhibit offered even more insight into the senses by its focus on cross-sensory interfaces.

Seeing is done with the brain, and not with the eyes. This is the idea behind one of the items in the exhibit, Brainport, a device for those who are blind or have low vision. Your representation of your physical environment from sight is based on interpretations your brain makes from the inputs that your eyes receive.

What if your brain received the information your eyes typically receive through another sense? A camera attached to Brainport’s headset receives visual inputs which are translated into a pixel-like grid pattern of gentle shocks perceived as “bubbles” on the wearer’s tongue. Users report being able to “see” their surroundings in their mind’s eye.

The Brainport is a camera attached to the forehead connected to a rectangular device that comes in contact with the wearer’s tongue.

Brainport turns images from a camera into a pixel-like pattern of gentle electric shocks on the tongue. (Image Credit: Cooper Hewitt)(Large preview)

Soundshirt also translates inputs typically perceived by one sense to inputs that can be perceived by another. This wearable tech is a shirt with varied sound sensors and subtle vibrations corresponding to different instruments in an orchestra, enabling a tactile enjoyment of a symphony. Also on display for interpreting sound was an empathetically designed hearing aid that looks like a piece of jewelry instead of a clunky medical device.

Designing for different sets of abilities often leads to innovations that turn out to be useful for people and settings beyond their intended usage. Curb cuts, the now familiar mini ramps on the corners of sidewalks useful to anyone wheeling anything down the sidewalk, originated from disability rights activism in the ’70s to make sidewalks wheelchair accessible. Pellegrino Turri invented the early typewriter in the early 1800s to help his blind friend write legibly, and the first commercially available typewriter, the Hansen Writing Ball, was created by the principal of Copenhagen’s Royal Institute for the Deaf-Mutes.

Vint Cerf cites his hearing loss as shaping his interest in networked electronic mail and the TCP/IP protocol he co-invented. Smartphone color contrast settings for color blind people are useful for anyone trying to read a screen in bright sunlight, and have even found an unexpected use in helping people to be less addicted to their phones.

The Hansen Writing Ball has brass colored keys arranged as if on the top half of a ball, with a curved sheet of paper resting under them.

The Hansen Writing Ball was developed by the principal of Copenhagen’s Royal Institute for the Deaf-Mutes. (Image Credit: Wikimedia Commons) (Large preview)

So, designing for different sets of abilities gives us new insights into how we perceive and interact with the environment, and leads to innovations that make for a blurry boundary between assistive technology and technology generally.

With that in mind, let’s turn to the web.

Assistive Tech And The Web

The web was intended to be accessible to all from the start. A quote you’ll run into a lot if you start reading about web accessibility is:

“The power of the Web is in its universality. Access by everyone regardless of disability is an essential aspect.”

— Tim Berners-Lee, W3C Director and inventor of the World Wide Web

What sort of assistive technologies are available to perceive and interact with the web? You may have heard of or used a screen reader that reads out what’s on the screen. There are also braille displays for web pages, and alternative input devices like an eye tracker I got to try out at the Access + Ability exhibit.

It’s fascinating to learn that web pages are displayed in braille; the web pages we create may be represented in 3D! Braille displays are usually made of pins that are raised and lowered as they “translate” each small part of the page, much like the device I saw Haben Girma use to read audience questions after her AccessU keynote. A newer company, Blitab (named for “blind” + “tablet”), is creating a braille Android tablet that uses a liquid to alter the texture of its screen.

Haben Girma sits at a conference table and uses her braille reader.

Haben Girma uses her braille reader to have a conversation with AccessU conference participants. (Photo used with her permission.) (Large preview)

People proficient with using audio screen readers get used to faster speech and can adjust playback to an impressive rate (as well as saving battery life by turning off the screen). This makes the screen reader seem like an equally useful alternative mode of interacting with web sites, and indeed many people take advantage of audio web capabilities to dictate or hear content. An interface intended for some becomes more broadly used.

Web accessibility is about more than screen readers; however, we’ll focus on them here because — as we’ll see — screen readers are central to the technical challenges of an accessible web.

Recommended reading: Designing For Accessibility And Inclusion by Steven Lambert

Technical Challenges And Early Approaches

Imagine you had to design a screen reader. If you’re like me before I learned more about assistive tech, you might start by imagining an audiobook version of a web page, thinking your task is to automate reading the words on the page. But look at this page. Notice how much you use visual cues from layout and design to tell you what its parts are for and how to interact with them.

  • How would your screen reader know when the text on this page belongs to clickable links or buttons?
  • How would the screen reader determine what order to read out the text on the page?
  • How could it let the user “skim” this page to determine the titles of the main sections of this article?

The earliest screen readers were as simple as the audiobook I first imagined, as they dealt with only text-based interfaces. These “talking terminals,” developed in the mid-’80s, translated ASCII characters in the terminal’s display buffer to an audio output. But graphical user interfaces (or GUI’s) soon became common. “Making the GUI Talk,” a 1991 BYTE magazine article, gives a glimpse into the state of screen readers at a moment when the new prevalence of screens with essentially visual content made screen readers a technical challenge, while the freshly passed Americans with Disabilities Act highlighted their necessity.

OutSpoken, discussed in the BYTE article, was one of the first commercially available screen readers for GUI’s. OutSpoken worked by intercepting operating system level graphics commands to build up an offscreen model, a database representation of what is in each part of the screen. It used heuristics to interpret graphics commands, for instance, to guess that a button is drawn or that an icon is associated with nearby text. As a user moves a mouse pointer around on the screen, the screen reader reads out information from the offscreen model about the part of the screen corresponding to the cursor’s location.

Graphics commands build a GUI from code. Graphics commands are also used to build a database representation of the screen, which can then be used by screen readers.

The offscreen model is a database representation of the screen based on intercepting graphics commands. (Large preview)

This early approach was difficult: intercepting low-level graphics commands is complex and operating system dependent, and relying on heuristics to interpret these commands is error-prone.

The Semantic Web And Accessibility APIs

A new approach to screen readers arose in the late ’90s, based on the idea of the semantic web. Berners-Lee wrote of his dream for a semantic web in his 1999 book Weaving the Web: The Original Design and Ultimate Destiny of the World Wide Web:

I have a dream for the Web [in which computers] become capable of analyzing all the data on the Web — the content, links, and transactions between people and computers. A “Semantic Web”, which makes this possible, has yet to emerge, but when it does, the day-to-day mechanisms of trade, bureaucracy, and our daily lives will be handled by machines talking to machines. The “intelligent agents” people have touted for ages will finally materialize.

Berners-Lee defined the semantic web as “a web of data that can be processed directly and indirectly by machines.” It’s debatable how much this dream has been realized, and many now think of it as unrealistic. However, we can see the way assistive technologies for the web work today as a part of this dream that did pan out.

Berners-Lee emphasized accessibility for the web from the start when founding the W3C, the web’s international standards group, in 1994. In a 1996 newsletter to the W3C’s Web Accessibility Initiative, he wrote:

The emergence of the World Wide Web has made it possible for individuals with appropriate computer and telecommunications equipment to interact as never before. It presents new challenges and new hopes to people with disabilities.

HTML4, developed in the late ’90s and released in 1998, emphasized separating document structure and meaning from presentational or stylistic concerns. This was based on semantic web principles, and partly motivated by improving support for accessibility. The HTML5 that we currently use builds on these ideas, and so supporting assistive technology is central to its design.

So, how exactly do browsers and HTML support screen readers today?

Many front-end developers are unaware that the browser parses the DOM to create a data structure specifically for assistive technologies. This tree structure, known as the accessibility tree, forms the API for screen readers, meaning that we no longer rely on intercepting the rendering process as the offscreen model approach did. HTML yields one representation that the browser can use both to render on a screen and to give to audio or braille devices.

HTML yields a DOM tree, which can be used to render a view, and to build up an accessibility tree that assistive tech like screen readers use.

Browsers use the DOM to render a view, and to create an accessibility tree for screen readers. (Large preview)

Let’s look at the accessibility API in a little more detail to see how it handles the challenges we considered above. Nodes of the accessibility tree, called “accessible objects,” correspond to a subset of DOM nodes and have attributes including role (such as button), name (such as the text on the button), and state (such as focused) inferred from the HTML markup. Screen readers then use this representation of the page.
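As a rough sketch (not an exact dump of any particular browser’s accessibility tree), a plain HTML button might be exposed to assistive technology like this:

```html
<!-- A plain HTML button -->
<button>Sign up</button>

<!-- Roughly, the corresponding accessible object:
     role:  button      (inferred from the tag)
     name:  "Sign up"   (computed from the text content)
     state: focusable, and "focused" while it has keyboard focus -->
```

No extra attributes are needed here; the semantic element alone supplies the role, name, and state.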

This is how a screen reader user can know an element is a button without making use of the visual style cues that a sighted user depends on. How could a screen reader user find relevant information on a page without having to read through all of it? In a recent survey, screen reader users reported that the most common way they locate the information they are looking for on a page is via the page’s headings. If an element is marked up with an h1–h6 tag, a node in the accessibility tree is created with the role heading. Screen readers have a “skip to next heading” functionality, thereby allowing a page to be skimmed.
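For instance, a heading hierarchy like the following (a simplified sketch loosely modeled on this article’s own sections) gives screen reader users an outline they can jump through:

```html
<!-- Each heading yields an accessible node with role "heading",
     so "skip to next heading" moves section by section -->
<h1>Web Accessibility In Context</h1>
<!-- ...intro content... -->
<h2>Assistive Technology</h2>
<!-- ...section content... -->
<h2>Assistive Tech And The Web</h2>
<!-- ...section content... -->
```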

Some HTML attributes are specifically for the accessibility tree. ARIA (Accessible Rich Internet Applications) attributes can be added to HTML tags to specify the corresponding node’s name or role. For instance, imagine our button above had an icon rather than text. Adding aria-label="sign up" to the button element would ensure that the button had a label for screen readers to represent to their users. Similarly, we can add alt attributes to image tags, thereby supplying a name to the corresponding accessible node and providing alternative text that lets screen reader users know what’s on the page.
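A sketch of both cases (the icon markup and file names are invented for illustration):

```html
<!-- An icon-only button: without a label, a screen reader
     has no useful name to announce -->
<button><svg aria-hidden="true"><!-- icon paths --></svg></button>

<!-- aria-label supplies the accessible object's name -->
<button aria-label="sign up">
  <svg aria-hidden="true"><!-- icon paths --></svg>
</button>

<!-- alt text supplies the name for an image's accessible object -->
<img src="signup-illustration.png"
     alt="A person filling in a sign-up form">
```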

The downside of the semantic approach is that it requires developers to use HTML tags and aria attributes in a way that matches their code’s intent. This, in turn, requires awareness among developers, and prioritization of accessibility by their teams. Lack of awareness and prioritization, rather than any technical limitation, is currently the main barrier to an accessible web.

So the current approach to assistive tech for the web is based on semantic web principles and baked into the design of browsers and HTML. Developers and their teams have to be aware of the accessibility features built into HTML to be able to take advantage of them.

Recommended reading: Accessibility APIs: A Key To Web Accessibility by Léonie Watson

AI Connections

Machine Learning (ML) and Artificial Intelligence (AI) come to mind when we read Berners-Lee’s remarks about the dream of the semantic web today. When we think of computers being intelligent agents analyzing data, we might think of this as being done via machine learning approaches. The early offscreen model approach we looked at used heuristics to classify visual information. This feels reminiscent of machine learning, too, except that in machine learning, the rules for classifying inputs are derived from an automated analysis of previously seen inputs rather than being hand-coded.

What if in the early days of figuring out how to make the web accessible we had been thinking of using machine learning? Could such technologies be useful now?

Machine learning has been used in some recent assistive technologies. Microsoft’s SeeingAI and Google’s Lookout use machine learning to classify and narrate objects seen through a smartphone camera. CTRL Labs is working on a technology that detects micro-muscle movements interpreted with machine learning techniques. In this way, it seemingly reads your mind about movement intentions and could have applications for helping with some motor impairments. AI can also be used for character recognition to read out text, and even to translate sign language to text. Recent Android advances using machine learning let users augment and amplify sounds around them and automatically live-transcribe speech.

AI can also be used to help improve the data that makes its way to the accessibility tree. Facebook introduced automatically generated alternative text to provide user images with screen reader descriptions. The results are imperfect, but point in an interesting direction. Taking this one step further, Google recently announced that Chrome will soon be able to supply automatically generated alternative text for images that the browser serves up.

What’s Next

Until (or unless) machine learning approaches become more mature, an accessible web depends on the API based on the accessibility tree. This is a robust solution, but taking advantage of the assistive tech built into browsers requires people building sites to be aware of them. Lack of awareness, rather than any technical difficulty, is currently the main challenge for web accessibility.

Key Takeaways

  • Designing for different sets of abilities can give us new insights and lead to innovations that are broadly useful.
  • The web was intended to be accessible from the start, and the history of the web is intertwined with the history of assistive tech for the web.
  • Assistive tech for the web is baked into the current design of browsers and HTML.
  • Designing assistive tech, particularly involving AI, is continuing to provide new insights and lead to innovations.
  • The main current challenge for an accessible web is awareness among developers, designers, and product managers.

Resources

Smashing Editorial
(dm, yk, il)

Source: Smashing Magazine, Web Accessibility In Context

Collective #522

dreamt up by webguru in Uncategorized | Comments Off on Collective #522

Embla

An extensible low level carousel for the web, written in TypeScript.

Check it out

Px

Px is a tiny 2D canvas framework for turn-based puzzle games.

Check it out



Collective #522 was written by Pedro Botelho and published on Codrops.


Source: Codrops, Collective #522

Image Optimization In WordPress

dreamt up by webguru in Uncategorized | Comments Off on Image Optimization In WordPress

Image Optimization In WordPress

Image Optimization In WordPress

Adelina Țucă



A slow website is everyone’s concern. Not only does it chase visitors away, but it also hurts your SEO. So trying to keep your site ‘in shape’ is definitely one of the main items to tick off when you run a business or even a personal site.

There are many ways to speed up your WordPress site, each one complementing the others. There is no universal method; improving your site speed is actually the sum of several of these methods combined. One of them is image optimization, which we will tackle extensively in this post.

So read further to learn how to manually and automatically optimize all the images on your WordPress site. This is a step-by-step guide on image optimization that will make your site lightweight and faster.

The Importance Of Image Optimization

According to Snipcart, the three main reasons why images slow down your WordPress site are:

  • They are too large. Use smaller sizes. I will talk about this later in the article.
  • They are too many, which demands as many HTTP requests. Using a CDN will help.
  • They contribute to a synchronous loading of elements, together with HTML, CSS, and JavaScript, which increases the render time. Displaying your images progressively (via lazy loading) will stop them from loading at the same time as the other elements, which will make the page load quicker.
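As a sketch of that last point, modern browsers can defer offscreen images natively via the loading attribute (browser support varies, so treat it as progressive enhancement; the file name and dimensions below are made up for illustration):

```html
<!-- The browser only fetches this image as it nears the viewport -->
<img src="large-photo.jpg" loading="lazy"
     alt="Example photo" width="800" height="600">
```

Browsers without support simply ignore the attribute and load the image normally; a lazy-loading plugin or script can cover them.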

So yes, optimizing your images is an essential practice to make your site lighter. But first, you need to discover what makes your site load slowly. This is where speed tests intervene.

How To Test Your WordPress Site Speed

There are many tools that test your website’s speed. The simplest method is Pingdom.

Pingdom is a popular tool used by both casual users and developers. All you need to do is open Pingdom, insert your WordPress site URL, choose the test location that’s closest to your hosting’s data center (based on your hosting’s servers), and start the test. If you have a CDN installed on your site, the location matters a great deal. But more on that later.

What’s nice about this tool is that, regardless of how simple its interface is, it displays advanced information about how a website performs, which is pure music to developers’ ears.

From these statistics, you will find out what your site is doing well and what needs to be improved. It’s a handy tool because it gives you plenty of data and advice on pages, requests, and other sorts of issues and performance analysis.

pingdom website speed test

(Large preview)

pingdom website speed test for image optimization

(Large preview)

pingdom website speed test performance example

(Large preview)

Along the same lines, GTmetrix is another cool tool, similar to Pingdom, that will analyze your site’s speed and performance in depth.

gtmetrix website performance test

(Large preview)

Note: GTmetrix usually reports a rather slower WordPress website than Pingdom does; this comes down to how each tool calculates its metrics. There are no discrepancies; it’s just that GTmetrix measures the fully loaded time, unlike Pingdom, which only counts the onload time.

The onload time is measured once a page has been processed completely and all the files on that page have finished downloading. That’s why the onload time will always be faster than the fully loaded time.

The fully loaded time is measured after the onload event, once the page stops transferring data again, which means that GTmetrix includes the onload time when it calculates the speed of a page. It basically measures the whole cycle of responses and transfers it gets from the page in question. Hence the slower times.

Google PageSpeed Insights is yet another popular tool for testing your site speed. Unlike the first two tools that only display your site performance on desktops, Google’s official testing tool is good at measuring the speed of your website’s mobile version, too.

Apart from that, Google will also give you its best recommendations on what needs to be improved on your site for getting faster loading times.

Usually, with either of these three tools, you can detect how heavily your images are affecting your site speed.

How To Speed Up Your WordPress Website

Of course, since this article is about image optimization, you guessed right that this is one of the methods. But before getting into the depths of the image optimization per se, let’s briefly talk about other ways that will help you if you have loads of images uploaded on your site.

Caching

Caching is the action of temporarily storing data in a cache so that, if a user accesses your site frequently, the data is delivered automatically without going through the initial loading process again (which takes place when the site’s files are requested for the first time). A cache is a sort of memory that collects frequently requested data and is used to increase the speed at which that data is served.

Caching is in fact really simple to set up. No matter if you do it manually or by installing a plugin, it can be implemented on your site pretty quickly. Two of the best plugins are WP Super Cache and W3 Total Cache; both are free and among the best rated in the official WordPress.org repository.

Content Delivery Networks

A CDN will request your site content from the nearest server location to your readers’ accessing point. It means that it keeps a copy of your site in many data centers located in different places around the world. Once a visitor accesses your site via their home location, the nearest server will request your content, which translates into faster loading times. Cloudflare and MaxCDN (now StackPath) are the most popular solutions for WordPress.

GZIP compression

Through this method, you can compress your site’s files to make them smaller. This will reduce your site’s bandwidth usage and transfer the files to the browser faster.

Both WP Super Cache and W3 Total Cache come with a GZIP compression feature that you can enable after installation. Also, many popular WordPress hosting providers have this feature already enabled in their standard packages. To find out if GZIP compression is enabled on your site, use this tool for a quick inspection.

[Image: testing whether GZIP compression is enabled on a WordPress site (large preview)]

You can also add GZIP compression to your WordPress site manually by modifying the .htaccess file.
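A common way to do this is a mod_deflate block; the snippet below is a sketch, the exact MIME types to list depend on your site, and your host must have mod_deflate enabled:

```apacheconf
# Enable GZIP compression for text-based assets (requires mod_deflate)
<IfModule mod_deflate.c>
  AddOutputFilterByType DEFLATE text/html text/css text/plain
  AddOutputFilterByType DEFLATE application/javascript application/json
  AddOutputFilterByType DEFLATE image/svg+xml
</IfModule>
```

Note that already-compressed formats like JPG and PNG gain nothing from GZIP, so image MIME types are deliberately left out.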

Other simple and common (sometimes omitted, though) tricks would be to use a lightweight WordPress theme, deactivate unnecessary plugins (those that you don’t use anymore or those that you don’t need temporarily), and clean your WordPress database regularly.

Paying attention to these details also contributes to reducing the time that WordPress needs to build and display a page. Sometimes, a feature-heavy theme or plugin can be a major factor in slowing down your site. Caching plugins can intervene in this situation but keeping your WordPress site as light and clean as possible might be a better approach.

Image Optimization

This is a very efficient and simple technique that contributes to speeding up your WordPress site. And this is today’s topic, so let’s break it down into pieces.

Why Image Optimization?

Nowadays, websites use more visuals than ever in their quest to catch the user's attention. Multimedia (images, videos, podcasts) has grown enormously in popularity over the last five years, leading site owners to build pages that are increasingly graphical and image-heavy.

At this very moment, we are surrounded by billions of high-resolution images, videos, and graphics. They are key to better-converting sites, and hence to better marketing and business results.

Sometimes, people forget or simply don't pay enough attention to the fact that regularly uploading images gradually affects their WordPress site's speed.

Especially if you have an image-heavy site and your content relies mostly on images and visuals, this should be your main concern.

How To Optimize Your Website Images

This can be done either manually or with plugins. Let’s start with the manual method. (This is mostly for those of you who are very keen on having control over your site and doing everything on your own.) Optimizing images manually will also help you understand to a great extent how plugins (the automated method that we’ll talk about a bit later) work.

If you'd rather automate the process of image optimization, you have a fallback: there are lots of great WordPress plugins out there that can save you a lot of time and still deliver great results. We'll talk more about that and test a few tools later on.

Optimize Your Website Images Manually

Optimizing can mean a lot of things. Here, we're talking about compression, resizing, using the right formats, cropping, and so on.

Use The Right Image Format

How can you tell which format is best for your site's images and which one holds up better to editing and compression? The answer is that there is no universal best format, but there are recommended formats based on each image's content.

PNG is mostly used for graphics, logos, illustrations, icons, design sketches, or text because it can be easily edited in photo editors and still keep great quality after compression. That's because PNG is lossless: it doesn't discard any data during compression.

JPG is more popular among photographers, casual users, or bloggers. It is lossy and can be compressed to smaller sizes while still preserving a good quality as seen with the naked eye. JPG is the format that supports millions of colors, that’s why people use it mostly for photos. It also supports high-compression levels.

An alternative to JPG and PNG could be WebP, a web image format introduced by Google, which has the role of providing even smaller sizes than JPG (or any other formats) while keeping the image quality similar to the latter. WebP format allows both lossy and lossless compression options. According to Google, a WebP image can get up to 34% smaller than a JPG image and up to 26% smaller than PNG images.

But WebP image format has its cons, such as not being supported by all browsers yet or by WordPress by default (you need tools for that). Learn more about how to integrate WebP images with WordPress.

The conclusion regarding image formats is that there is no such thing as a universally right format. It really depends on the type of images you need on your site. If you're using photos with a large variety of colors, JPG is probably the right format because it's good at compressing color-heavy photos, which can be reduced to a considerable extent. It does not suit images with little color data, like graphics or screenshots.

Let’s do a quick test. I saved a JPG image containing a multitude of colors, then converted it to PNG. After conversion, the photo became much bigger in size. Then, I used ImageResize.org tool to compress both images (I chose this tool because it allowed me to compress both formats and use files larger than 1MB).

This is the uncompressed image (via MyStock.photos):

[Image: uncompressed original image for the JPG compression test (large preview)]

And these are the results:

[Image: results after testing JPG image compression (large preview)]

On the other hand, PNG is the right format if you're using many screenshots, graphics, logos, or transparent images; in general, images with very few colors or images that contain blocks of color (for instance, transitions between light and dark backgrounds). PNG is lossless and, after compression, often results in much smaller files than the equivalent JPG would be. Saving these kinds of images as JPG can make them low-quality and blurry.
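PNG's lossless compression is DEFLATE-based, the same algorithm exposed by Python's zlib module, which is why flat, repetitive images like screenshots shrink so well while noisy photographic data barely compresses. A rough stand-in using raw bytes (not real image files, just an illustration of the principle):

```python
import os
import zlib

# "Screenshot-like" data: long runs of identical bytes (flat background color).
flat = b"\x40" * 100_000

# "Photo-like" data: high-entropy bytes, similar to sensor noise in a photo.
noisy = os.urandom(100_000)

flat_c = zlib.compress(flat, level=9)
noisy_c = zlib.compress(noisy, level=9)

print(f"flat:  100000 -> {len(flat_c)} bytes")   # shrinks to a tiny fraction
print(f"noisy: 100000 -> {len(noisy_c)} bytes")  # barely shrinks, if at all
```

Real PNG encoding adds filtering passes on top of DEFLATE, but the asymmetry is the same: few-color images compress losslessly to tiny files, photos don't.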

Let’s do another test. I saved a screenshot of my WordPress dashboard both as JPG and PNG. Then I used the same ImageResize.org for compressing each format. It’s worth mentioning that the PNG file was saved in a significantly smaller size than the JPG to begin with.

This is the image:

[Image: original screenshot for the PNG compression test (large preview)]

The results after compression:

[Image: results after testing PNG image compression (large preview)]
Find Out The Maximum Display Size Of Your Images

If you’re into optimizing images by yourself, you should first find out what’s their maximum display size. Since your site is responsive, all the images you upload will be served at different resolutions based on the user viewport (the device they’re browsing your site from).

The maximum display size is the largest resolution at which the image can be rendered across all the potential viewports and screens that can access it.

How can you check the maximum display size of an image?

First, open a page or post that contains the image you want to check. Resize your browser manually (make it gradually smaller by dragging its edges) until the image jumps to its largest dimension. This point is called a "breakpoint" because the layout, and with it the image size, suddenly changes.

After the image jumps to its largest dimension, right-click it and choose Inspect (if your browser is Chrome). Hover over the image's URL in the Elements panel and you'll see both the rendered dimensions it is served at and its original (intrinsic) dimensions. The intrinsic size is what users actually download, while the rendered size represents the maximum display size of the image on that page.

[Image: finding the maximum display size of an image in Chrome DevTools (large preview)]

With this information in mind, you can now resize and crop your image to match those dimensions. This way, you optimize it efficiently: it still looks great on screen without weighing your site down.

For instance, if you want your images to be Retina-friendly, export them at twice the maximum display size for better quality. The image in the screenshot displays at 428×321 pixels, so save it at 856×642 pixels for better Retina quality.
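That rule of thumb is simple arithmetic; a quick sketch using the display size from the screenshot above:

```python
def retina_dimensions(display_width, display_height, density=2):
    """Target export size for a given maximum display size and pixel density."""
    return display_width * density, display_height * density

# Maximum display size measured in DevTools: 428x321 pixels.
print(retina_dimensions(428, 321))  # -> (856, 642)
```

For 3x "Super Retina" screens you would pass `density=3`, at the cost of a correspondingly larger file.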

Resize And Crop Images

When you’re dealing with files that have dimensions a lot bigger than you normally need to showcase on your site, you can simply resize or crop them and only then upload them on your site. You will save disk space and keep your site lighter.

Of course, if you have a photography portfolio and it’s important for you that the visitors see your works in their original form, then yes, you have a real motive for presenting them at their best.

You can also crop your images anytime if there’s only one single detail that you want to show to the people and there’s no reason to upload the broad, full image if the rest of the content is redundant.

Compress Images

Every photo editor asks at what quality you want to save an edited image. You'd usually go with 100% (for obvious reasons), but you can lower it a bit, to, say, 70-80%. If the image has a high resolution, you won't notice a big difference, and the file size will be noticeably smaller.

After you’ve set a lower quality percentage and saved the image, you can go deeper with another round of optimization of the same image by using an online tool to reduce its size even more.

JPEG Optimizer and JPEG.io achieve reductions of around 60%, while TinyPNG (if you choose to work with PNGs) reaches around 70%. Kraken handles both formats, returning files around 70% smaller via lossy compression.

Mac users can try ImageOptim, which compresses both JPG and PNG formats up to 50%.

Set Image Optimization On Autopilot

To automate the image optimization process on your site, you need tools, i.e. WordPress plugins, that do everything we covered earlier on autopilot.

There are several automatic WordPress solutions for image optimization, but in this post I will only present those with solid feature sets that cover complete image optimization.

I’ll also show you the tests I conducted with each of the next three tools so you can observe everything in detail.

We are going to compare Optimole, ShortPixel, and Smush.

Optimole

[Image: Optimole, one of the best image optimization WordPress plugins (large preview)]

Optimole is probably the most complete of the three because it bundles all the features you might need for efficient image optimization. So if you're looking for smart image optimization in all its aspects, you might like Optimole.

Optimole transfers your images to a cloud where they are optimized. The optimized images are then served through a CDN, which makes them load fast. The plugin replaces each image's URL with a custom one.

Adapting images to each user's screen size is another key feature of Optimole: it automatically serves your images at the right dimensions for the user's viewport, so if you're viewing a page from a tablet, it delivers the size and quality appropriate for a tablet. These transformations happen fast, without consuming any extra space on your site.

Another smart touch you'll enjoy about Optimole is its ability to detect when a user is on a slow connection. When it recognizes one, the plugin compresses your images at a higher rate so your visitors' page loading time isn't affected.

If you want lazy loading, the plugin also allows you to use it on your site. You just tick one box and Optimole does the work for you.

Another interesting thing about Optimole is that it doesn't optimize all the images in your WordPress media library up front. It only optimizes images as visitors request the pages containing them, so don't panic if you install the plugin and nothing seems to happen. Once a user requests an image, the plugin does what I described above. This saves a lot of space, since the plugin keeps a single source image and transforms it in the cloud based on users' requests and devices.

What I love about this plugin is that it is smart and efficient and it’s never doing unnecessary work or conversions. We are using it on three of our sites: ThemeIsle, CodeinWP, and JustFreeThemes. You can check them out as demos.

ShortPixel

[Image: ShortPixel, one of the best image optimization WordPress plugins (large preview)]

ShortPixel is a popular WordPress plugin that’s great at optimizing your images in bulk. The plugin works on autopilot and optimizes by default every image that you upload to your media library. You can deactivate this option if you don’t need it, though.

The plugin offers lossy, glossy, and lossless compression, which you can apply even to thumbnails. All the modified images are saved in a separate folder on your site where you can always go back and forth to undo/redo an optimization. For example, convert back from lossless to lossy and vice-versa, or simply restore to the original files.

Moreover, if you go to the WordPress media library and select the list view instead of the grid view that comes by default, you will notice that the last column keeps you up to date regarding the compression status. This way, you can manually skim through all the images and compress/decompress those that you need. For every image, you will see by what percentage it was compressed.

If you want to optimize them all at once, just select Bulk Actions -> Optimize with ShortPixel (or any of its sub-items), and click Apply. Your images will be compressed in just a few moments.

Moreover, ShortPixel lets you convert PNG to JPG automatically, create WebP versions of your images, and optimize PDF files. It also provides CMYK to RGB conversion. It works with Cloudflare CDN service to upload the optimized images on a cloud server.

Smush

[Image: Smush, one of the best image optimization WordPress plugins (large preview)]

Another big name in the WordPress plugin space, Smush is a friendly tool that optimizes your images on the fly. Smush comes with a beautiful tracking dashboard where it keeps you up to date on your website's total savings, how many items haven't been optimized yet, how many have been optimized already, and what methods it used.

It also has bulk compression, lazy loading, automatic PNG to JPG, and CDN integration. Same as ShortPixel, Smush also adds the compression status to each image in your media library, so you can either manage them individually or in bulk.

Smush uses lossless compression by default, focusing on keeping images as close to their originals as possible. The downside is that the free version doesn't offer as many features as the aforementioned plugins do; you need to pay for some of the basic functionality.

Testing The Three Image Optimization Plugins

I took the following 904 KB image from MyStock.photos and ran it through a series of tests with the three plugins presented above.

[Image: original image used for the plugin tests (large preview)]

This is how the plugins performed:

  • Optimole: 555 KB (312 KB if you pick the High compression level)
  • ShortPixel: 197.87 KB
  • Smush: 894 KB

*Optimole and ShortPixel are using lossy compression, while Smush is using lossless compression.
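Expressed as reduction percentages against the 904 KB original (plain arithmetic on the figures above):

```python
ORIGINAL_KB = 904

# Compressed sizes reported in the test above.
results_kb = {
    "Optimole (Auto)": 555,
    "Optimole (High)": 312,
    "ShortPixel": 197.87,
    "Smush": 894,
}

for plugin, size_kb in results_kb.items():
    saved = 1 - size_kb / ORIGINAL_KB
    print(f"{plugin}: {saved:.1%} smaller")
```

That works out to roughly 38.6% for Optimole on Auto, 65.5% on High, 78.1% for ShortPixel, and 1.1% for Smush's lossless pass.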

The next approach is even more interesting.

I uploaded this very image to my WordPress site and then used it in a blog post. Both Optimole and ShortPixel automatically reduced its size to match my screen resolution and post layout. So instead of serving an almost 1 MB image merely scaled down by the browser to fit the post, I am now serving the same image reduced to its maximum display size. The plugins detected the size and dimensions needed in my blog post and resized the image accordingly.

And these are the reduced dimensions per each plugin:

  • Optimole: 158 KB, 524×394 pixels
  • ShortPixel: 71.7 KB, 768×577 pixels
  • Smush: didn’t optimize for this specific request

That being said, it's important to note two things:

  1. ShortPixel returned the best compression size but at larger dimensions; overall a good result, though it lost a bit of the image's original quality.

     [Image: ShortPixel image optimization test result, desktop (large preview)]

  2. Optimole (which I set to Auto compression this time) returned a larger file but at smaller dimensions and with better quality as seen with the naked eye. Optimole found a good balance between size and dimensions, so quality doesn't lose much ground here.

     [Image: Optimole image optimization test result, desktop (large preview)]

This is how it’s supposed to look on the live website:

[Image: how the optimized image displays in the blog post on desktop (large preview)]

If you ask me, Optimole adapted better to this specific request and the user’s viewport (in this case, my laptop screen).

Now, let’s have a quick peek at how these plugins adapt to mobile screens:

I followed the same routine: I activated one plugin at a time and requested the same page from my (Android) mobile. The results:


  • Optimole: 96 KB, 389×292 pixels.
    [Image: Optimole image optimization test result, mobile]
  • ShortPixel: 19 KB, 300×225 pixels.
    [Image: ShortPixel image optimization test result, mobile]
  • Smush: didn't optimize the image for mobile.

My mobile demo screen:

[Image: how the optimized image displays in the blog post on mobile (large preview)]

As in the first example, Optimole returned a bigger, more quality-focused version, while ShortPixel produced a smaller file with a slight loss in quality.

I initially wanted to use a 6 MB image for the main desktop test but, since Smush's free version doesn't allow files larger than 1 MB (the others do), I had to pick an image under 1 MB.

I’ll do this test anyway with Optimole and ShortPixel only.

So, let’s do the fourth test, this time on a larger image.

The original image is 6.23 MB.

[Image: original 6.23 MB image used for the fourth test (large preview)]

The optimized sizes are:

  • Optimole: 798 KB with Auto compression level, 480 KB with High compression level
  • ShortPixel: 400.58 KB

There’s also EWWW Image Optimizer plugin that, similarly to Smush, only uses lossless compression and only reduces images by a fairly small percentage.

The conclusions after the four tests:

  • ShortPixel delivers the best optimization rates (around 70-80%), while Optimole lands at roughly 40% on the Auto compression level and around 70% on High.
  • Optimole adapts the content better to the user's viewport and internet connection. If you set it to Auto, it knows how to both reduce size consistently and preserve great quality. I like how it juggles all these variables, helping to improve your site's loading time while still displaying high-quality images to your visitors.

If I had to weigh not only the test results but also the plugins' other qualities (namely simplicity and user-friendliness), I would go with Optimole. But ShortPixel is also a great contender that I warmly recommend. Smush is a decent option too, if you're willing to pay for it or you're a photographer who wants to keep their images as lightly processed as possible.

Wrapping Up

Do not underestimate the impact of image optimization. Images are one of the main reasons for a slow website, and Google doesn't like slow websites; neither do your visitors and clients, especially if you're seeking to monetize your WordPress site. An unoptimized site will hurt your SEO, drag you down in the SERPs, increase your bounce rate, and lose you money.

Whether you prefer optimizing images manually or letting a plugin automate it for you, you will see good results sooner rather than later.

What other methods do you use to keep the images on your WordPress site in check?

Smashing Editorial
(dm, yk, il)

Source: Smashing Magazine, Image Optimization In WordPress
