
Collective #527

dreamt up by webguru in Uncategorized | Comments Off on Collective #527



Exthouse

Exthouse is a tool powered by Lighthouse that provides a report about a web extension’s impact on web performance.

Check it out



Our Sponsor

Divi: The Powerful Visual Page Builder

Divi is a revolutionary WordPress theme and visual page builder for WordPress. With Divi, you can build your website visually. Add, arrange and design content and watch everything happen instantly right before your eyes.

Try it



Writing HTML in HTML

John Ankarström explains why he rewrote his website in pure HTML, without using a static site generator.

Read it



Freezeframe.js

This library pauses animated GIFs and enables them to animate on hover, click, touch or using a trigger function. A former jQuery plugin that is now built with modern JavaScript.

Check it out






Flappy Bird

Charlie Gerard created this fun experiment where you can play the Flappy Bird game with browser windows. Disable your pop-up blocker to make it work.

Check it out




Photo Creator 2.0

With Icons8’s new Photo Creator you can create diverse photos with thousands of masked models, AI-powered face swap, and background removal.

Check it out




Shape (Beta)

Shape is an icon and illustration tool where you can export creations as React components. By Meng To.

Check it out









Collective #527 was written by Pedro Botelho and published on Codrops.


Source: Codrops, Collective #527

What I Learned From Designing AR Apps

dreamt up by webguru in Uncategorized | Comments Off on What I Learned From Designing AR Apps

What I Learned From Designing AR Apps

Gleb Kuznetsov



The digital and technological landscape is constantly changing — new products and technologies are popping up every day. Designers have to keep track of what is trending and where creative opportunities are. A great designer has the vision to analyze new technology, identify its potential, and use it to design better products or services.

Among the various technologies that we have today, there’s one that gets a lot of attention: Augmented Reality. Companies like Apple and Google realize the potential of AR and invest significant amounts of resources into this technology. But when it comes to creating an AR experience, many designers find themselves in unfamiliar territory. Does AR require a different kind of UX and design process?

As for me, I’m a big fan of learning-by-doing, and I was lucky enough to work on the Airbus mobile app as well as the Rokid AR glasses OS product design. I’ve established a few practical rules that will help designers to get started creating compelling AR experiences. The rules work both for mobile augmented reality (MAR) and AR glasses experiences.

Rokid Glasses motion design exploration by Gleb Kuznetsov

Glossary

Let’s quickly define the key terms that we will use in the article:

  • Mobile Augmented Reality (MAR) delivers augmented reality experiences on mobile devices (smartphones and tablets);
  • AR Glasses are wearable smart displays with see-through optics that present an augmented reality experience.

1. Get Buy-In From Stakeholders

Similar to any other project you work on, it is vital to get support from stakeholders as early in the process as possible. Despite the technology being buzzed about for years, many stakeholders have never used AR products. As a result, they may question the technology simply because they don’t understand the value it delivers. Our objective is to get their agreement.

“Why do we want to use AR? What problem does it solve?” are questions that stakeholders ask when they evaluate the design. It’s vital to connect your design decisions to the goals and objectives of the business. Before reaching stakeholders, you need to evaluate your product for AR potential. Here are three areas where AR can bring a lot of value:

  • Business Goals
    Understand the business goals you’re trying to solve with AR. Stakeholders always appreciate design solutions that connect to the goals of the business, and businesses often respond to quantifiable numbers. Thus, be ready to explain how your design is intended to help the company make more money or save money.
  • Helpfulness For Users
    AR will provide a better user experience and make the user journey a lot easier. Stakeholders appreciate technologies that improve the main use of the app. Think about the specific value that AR brings to users.
  • Creativity
    AR is excellent when it comes to creating a more memorable experience and improving the design language of a product. Businesses often have a specific image they are trying to portray, and product design has to reflect this.

Only when you have a clear answer to the question “Why is this better with AR?” should you share your thoughts with stakeholders. Invest time in preparing a presentation. Seeing is believing, and you’ll have a better chance of buy-in from management when you show them a demo. The demo should make it clear what you are proposing.

2. Discovery And Ideation

Explore And Use Solutions From Other Fields

No matter what product you design, you have to spend enough time researching the subject. When it comes to designing for AR, look for innovations and successful examples with similar solutions from other industries. For example, when my team was designing audio output for AR glasses, we learned a lot from headphones and speakers on mobile phones.

Design User Journey Using “As A User I Want” Technique

One of the fundamental things you should remember when designing AR experiences is that AR exists outside of the phone or glasses. AR technology is just a medium that people use to receive information. The tasks that users want to accomplish using this technology are what is really important.

“How to define a key feature set and be sure it will be valuable for our users?” is a crucial question you need to answer before designing your product. Since the core idea of user-centered design is to keep the user in the center, your design must be based on the understanding of users, their goals and contexts of use. In other words, we need to embrace the user journey.

When I work on a new project, I use a simple technique “As a [type of user], I want [goal] because [reason].” I put myself in the user’s shoes and think about what will be valuable for them. This technique is handy during brainstorming sessions. Used together with storyboarding, it allows you to explore various scenarios of interaction.

In the article “Designing Tomorrow Today: the Airbus iflyA380 App,” I’ve described in detail the process that my team followed when we created the app. The critical element of the design process was getting into the passenger’s mind, looking for insights into what the best user experience would be before, during and after their flight.

To understand what travelers like and dislike about the travel experience, we held a lot of brainstorming sessions together with Airbus. Those sessions revealed a lot of valuable insights. For example, we found that visiting the cabin (from home) before flying on the A380 was one of the common things users want to do. The app uses augmented reality so people can explore the cabin and virtually visit the upper deck, the cockpit, the lounges — wherever they want to go — even before boarding the plane.

IFLY A380 iOS app design by Gleb Kuznetsov.

The app also accompanies passengers from the beginning to the end of their journey — basically, everything a traveler wants to do with the trip is wrapped up in a single app. Finding your seat is one of the features we implemented: it uses AR to show your seat in the plane. As a frequent traveler, I love that feature; you don’t need to search for your seat as you enter the cabin, you can do it beforehand — from the comfort of your couch. Users can access this feature right from the boarding pass — by tapping on the ‘glass’ icon.

IFLY A380 app users can access the AR feature by tapping on the ‘glass’ icon.

Narrow Down Use Cases

It might be tempting to use AR to solve a few different problems for users, but in many cases it’s better to resist this temptation. Why? Because by adding too many features to your product, you make it not only more complex but also more expensive. This rule is even more critical for AR experiences, which generally require more effort. It’s always better to start with one simple but well-designed AR experience rather than multiple complex but loosely designed ones.

Here are a few simple rules to follow:

  • Prioritize the problems and focus on the critical ones.
  • Use storyboarding to understand exactly how users will interact with your app.
  • Remember to be realistic. Being realistic means that you need to strike a balance between creativity and technical capabilities.

Use Prototypes To Assess Ideas

When we design traditional apps, we often use static sketches to assess ideas. But this approach won’t work for AR apps.


Understanding whether a particular idea is good or bad cannot be captured from a static sketch; quite often the ideas that look great on paper don’t work in a real-life context.

Thus, we need to interact with a prototype to get this understanding. That’s why it’s essential to get to prototyping state as soon as possible.

It’s important to mention that when I say ‘prototyping state’ I don’t mean a state when you create a polished high-fidelity prototype of your product that looks and work as a real product. What I mean is using a technique of rapid prototyping and building a prototype that helps you experience the interaction. You need to make prototypes really fast — remember that the goal of rapid prototyping is in evaluating your ideas, not in demonstrating your skills as a visual designer.

3. Design

Similar to any other product you design, when you work on an AR product, your ultimate goal is to create an intuitive, engaging, and clean interface. But this can be challenging since the interface in AR apps accounts for both input and output.

Physical Environment

AR is inherently an environmental medium. That’s why the first step in designing an AR experience is defining where the user will be using your app. It’s vital to select the environment up front. And when I say ‘environment’, I mean the physical environment where the user will experience the app — it could be indoors or outdoors.

Here are three crucial moments that you should consider:

  1. How much space do users need to experience AR? Users should have a clear understanding of the amount of space they’ll need for your app. Help users understand the ideal conditions for using the app before they start the experience.
  2. Anticipate that people will use your app in environments that aren’t optimal for AR. Most physical environments have limitations. For example, your app might be an AR table tennis game, but your users might not have a large horizontal surface available. In that case, you might want to fall back to a virtual table generated based on the device orientation.
  3. Light estimation is essential. Your app should analyze the environment automatically and provide contextual guidance if the environment is not good enough. If it is too dark or too bright for your app, tell the user to find a better place to use it. ARCore and ARKit have built-in systems for light estimation (see the sketch after this list).
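
Here is a minimal sketch of what such a lighting check might look like in a web-based MAR app. The getAmbientLightEstimate() helper and the thresholds are hypothetical — in practice, the value would come from ARKit’s or ARCore’s light estimation (or the WebXR lighting-estimation API), and the thresholds would be tuned for your content:

```javascript
// Hypothetical helper: returns an ambient light estimate in lumens.
// Replace the body with the value your AR framework exposes
// (ARKit, ARCore or the WebXR lighting-estimation API).
function getAmbientLightEstimate() {
  return 1000; // placeholder value
}

// Rough, illustrative thresholds — tune them for your own app.
const TOO_DARK = 200;
const TOO_BRIGHT = 2000;

function checkLightingAndGuideUser() {
  const lumens = getAmbientLightEstimate();
  if (lumens < TOO_DARK) {
    showHint('It looks quite dark here — try moving to a brighter spot.');
  } else if (lumens > TOO_BRIGHT) {
    showHint('There is a lot of glare — try moving away from direct light.');
  }
}

function showHint(message) {
  // Render a non-blocking, on-screen hint rather than interrupting the session.
  console.log(message);
}
```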

When my team designed the Airbus iflyA380 mobile AR experience, we took the available physical space into account. We also considered other aspects of the interaction, such as the speed at which the user has to make decisions. For instance, a user who wants to find her seat during boarding won’t have much time.

We sketched the environment (in our case, the inside and outside of a plane) and put AR objects in our sketch. By making our ideas tangible, we got an understanding of how the user will want to interact with our app and how our app will adapt to the constraints of the environment.

AR Realism And AR Objects Aesthetics

After you define the environment and required properties, you will need to design the AR objects. One of the goals of creating an AR experience is to blend the virtual with the real. The objects you design should fit into the environment — people should believe that the AR objects are real. That’s why it’s important to render digital content in context with the highest level of realism.

Here are a few rules to follow:

  • Focus on the level of detail and design 3D assets with lifelike textures. I recommend using a multi-layer texture model such as PBR (physically based rendering). Most AR development tools support it, and it is the most cost-effective way to achieve an advanced degree of detail for your AR objects (see the sketch after this list).
  • Get the lighting right. Lighting is a crucial factor in creating realism — the wrong light instantly breaks the immersion. Use dynamic lighting, reflect environmental lighting conditions on virtual objects, and cast object shadows and reflections on real-world surfaces to create more realistic objects. Your app should also react to changes in real-world lighting.
  • Minimize the size of textures. Mobile devices are generally less powerful than desktops. Thus, to let your scene load faster, don’t make textures too large. Strive to use 2k resolution at most.
  • Add visual noise to AR textures. Flat-colored surfaces will look fake to the user’s eye. Textures will appear more lifelike when you introduce rips, pattern disruptions, and other forms of visual noise.
  • Prevent flickering. Update the scene 60 times per second to prevent flickering of AR objects.
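
For web-based mobile AR, here is a minimal sketch of a PBR-textured, dynamically lit object using three.js (the texture paths and geometry are placeholders):

```javascript
import * as THREE from 'three';

const loader = new THREE.TextureLoader();

// MeshStandardMaterial is three.js's physically based (PBR) material.
// The texture paths below stand in for your own asset files.
const material = new THREE.MeshStandardMaterial({
  map: loader.load('textures/wood_basecolor.jpg'),        // base color (albedo)
  normalMap: loader.load('textures/wood_normal.jpg'),     // fine surface detail
  roughnessMap: loader.load('textures/wood_roughness.jpg'),
  metalness: 0.0,
  roughness: 1.0,
});

const chair = new THREE.Mesh(new THREE.BoxGeometry(0.5, 0.9, 0.5), material);
chair.castShadow = true; // ground the object with a shadow

// Dynamic light so the object reacts to the scene instead of looking "pasted on".
const light = new THREE.DirectionalLight(0xffffff, 1.0);
light.castShadow = true;

const scene = new THREE.Scene();
scene.add(chair, light);
```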

Design For Safety And Comfort

AR is usually accompanied by the word ‘immersive.’ Creating an immersive experience is a great goal, but AR immersion can be dangerous — people can become so immersed in their smartphones or glasses that they forget what is happening around them, and this can cause problems. Users might not notice hazards around them and bump into objects. This phenomenon is known as cognitive tunneling, and it has caused a lot of physical injuries.

  • Keep users from having to do anything uncomfortable — for example, physically demanding actions or rapid/expansive motion.
  • Keep the user safe. Avoid situations where users have to walk backward.
  • Avoid overly long AR sessions. Users can get fatigued when using AR for extended periods. Design stopping points and in-app notifications that remind them to take a break. For instance, if you design an AR game, let users pause or save their progress.

Placement For Virtual Objects

There are two ways of placing virtual objects — on the screen or in the world. Depending on the needs of your project and the device’s capabilities, you can follow either approach. Generally, virtual elements should be placed in world space if they are supposed to act like real objects (e.g., a virtual statue in AR space), and as an on-screen overlay if they are intended to be UI controls or informational messages (e.g., a notification).
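
As a small sketch of that distinction in a three.js-based MAR scene (the object, positions, and message are illustrative): a world-space object is added to the 3D scene, while a notification lives as a flat overlay pinned to the viewport.

```javascript
import * as THREE from 'three';

const scene = new THREE.Scene();

// World-space object: it sits at a fixed position in the environment,
// so the user can walk around it like a physical statue.
const statue = new THREE.Mesh(
  new THREE.CylinderGeometry(0.2, 0.3, 1.2, 32),
  new THREE.MeshStandardMaterial({ color: 0x888888 })
);
statue.position.set(0, 0, -2); // roughly two meters in front of the starting camera
scene.add(statue);

// Screen-space element: a flat DOM overlay that stays glued to the viewport,
// which suits notifications and UI controls.
const toast = document.createElement('div');
toast.textContent = 'Tap the statue to learn more';
Object.assign(toast.style, {
  position: 'fixed',
  bottom: '24px',
  left: '50%',
  transform: 'translateX(-50%)',
});
document.body.appendChild(toast);
```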

Rokid Glasses.

‘Should every object in AR space be 3D?’ is a common question among designers who work on AR experiences. The answer is no. Not everything in the AR space should be 3D. In fact, in some cases like in-app notifications, it’s preferable to use flat 2D objects because they will be less visually distracting.

Rokid Glasses motion design exploration by Gleb Kuznetsov.

Avoid Using Haptic Feedback

Phone vibrations are frequently used to send feedback in mobile apps. But using the same approach in AR can cause a lot of problems — haptic feedback introduces extra noise and makes the experience less enjoyable (especially for AR Glasses users). In most cases, it’s better to use sound effects for feedback.
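
In a web-based MAR app, that feedback can be a short audio cue rather than a vibration — a minimal sketch (the audio file path is a placeholder):

```javascript
// Play a short confirmation sound instead of triggering device vibration.
const confirmSound = new Audio('sounds/confirm.mp3'); // placeholder path

function playSelectionFeedback() {
  confirmSound.currentTime = 0; // restart the clip if it is already playing
  confirmSound.play();
}
```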

Make A Clear Transition Into AR

For both MAR and AR glasses experiences, you should let users know they’re about to transition into AR. Design a transition state. For the iflyA380 app, we used an animated transition — a simple animated effect that the user sees when tapping on the AR mode icon.

Trim all the fat.

Devote as much of the screen as possible to viewing the physical world and your app’s virtual objects:

  • Reduce the total number of interactive elements available to the user on the screen at any one time.
  • Avoid placing visible UI controls and text messages in your viewport unless they are necessary for the interaction. A visually clean UI lends itself seamlessly to the immersive experience you’re building.
  • Prevent distractions. Limit how often objects appear on the user’s screen out of the blue. Anything that appears unexpectedly instantly kills realism and makes the user focus on that object.

AR Object Manipulation And Delineating Boundaries Between The ‘Augment’ And The ‘Reality’

When it comes to designing the mechanism of interaction with virtual objects, favor direct manipulation — the user should be able to touch an object on the screen and interact with it using standard, familiar gestures, rather than via separate visible UI controls.

Also, users should have a clear understanding of what elements they can interact with and what elements are static. Make it easy for users to spot interactive objects and then interact with them by providing visual signifiers for interactive objects. Use glowing outlines or other visual highlights to let users know what’s interactive.

Scan object effect for outdoor MAR by Gleb Kuznetsov.

When the user interacts with an object, you need to communicate visually that the object is selected. Design a selection state — either highlight the entire object or the space underneath it to give the user a clear indication that it’s selected.
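
In a three.js-based scene, a selection state can be as simple as toggling the material’s emissive color — a rough sketch (the highlight color is illustrative):

```javascript
// Toggle a selection highlight on a three.js mesh that uses MeshStandardMaterial.
// The original emissive color is stored so it can be restored on deselection.
function setSelected(mesh, selected) {
  const material = mesh.material;
  if (selected) {
    mesh.userData.originalEmissive = material.emissive.getHex();
    material.emissive.setHex(0x3366ff); // soft blue glow marks the selection
  } else if (mesh.userData.originalEmissive !== undefined) {
    material.emissive.setHex(mesh.userData.originalEmissive);
  }
}

// Typical usage from a tap handler, after a raycaster hit test:
// setSelected(hit.object, true);
```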

Last but not least, follow the rules of physics for objects. Just like real objects, AR objects should react to the real-world environment.

Design For Freedom Of Camera

AR invites movement and motion from the user. One of the significant challenges when designing for AR is giving users the ability to control the camera. When you give users control of the view, they will swing the device around in an attempt to find points of interest — and not all apps are designed to help the user control the viewfinder.

Google identifies four different ways that a user can move in AR space:

  1. Seated, with hands fixed.
  2. Seated, with hands moving.
  3. Standing still, with hands fixed.
  4. Moving around in a real-world space.

The first three ways are common for mobile AR while the last one is common for AR glasses.

In some cases, MAR users will want to rotate the device for ease of use. Don’t interrupt the camera view with a rotation animation.

Consider Accessibility When Designing AR

As with any other product we design, our goal is to make augmented reality technology accessible for people. Here are a few general recommendations on how to address real-world accessibility issues:

  • Blind users. Visual information is not accessible to blind users. To make AR accessible for blind users, you might want to use audio or haptic feedback to deliver navigation instructions and other important information (see the sketch after this list).
  • Deaf or hard-of-hearing users. For an AR experience that requires voice interaction, you can use visual signals as an input method (also known as speechreading). The app can learn to analyze lip movement and translate this data into commands.
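
As a small illustration of the first point, a web-based MAR app could voice its navigation guidance with the browser’s built-in SpeechSynthesis API (the message text is illustrative):

```javascript
// Speak a navigation instruction aloud as a complement to on-screen AR guidance.
function announce(instruction) {
  if (!('speechSynthesis' in window)) return; // fall back silently if unsupported
  const utterance = new SpeechSynthesisUtterance(instruction);
  utterance.rate = 1.0;
  window.speechSynthesis.speak(utterance);
}

// Example: voice the same guidance that sighted users see as an AR marker.
announce('Your gate is 20 meters ahead, on the left.');
```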

If you’re interested in learning more practical tips on how to create accessible AR apps, consider watching the video talk by Leah Findlater:

Encourage Users To Move

If your experience demands exploration, remind users they can move around. Many users have never experienced a 360-degree virtual environment before, and you need to motivate them to change the position of their device. You can use an interactive object to do that. For example, during I/O 2018, Google used an animated fox in Google Maps that guided users to the target destination.

This AR experience uses an animated bird to guide users.

Remember That Animation Is A Designer’s Best Friend

Animation can be multipurpose. First, you can use a combination of visual cues and animation to teach users. For example, an animation of a phone moving around will make it clear what users have to do to initialize the app.

Second, you can use animation to create emotions.


One second of emotion can change the whole reality for people engaging with a product.

Well-designed animated effects help to create a connection between the user and the product — they make the object feel tangible. Even a simple object such as a loading indicator can build a bridge of trust between users and the device.

Rokid Alien motion design by Gleb Kuznetsov.

One critical point about animation: after discovering the elements of design and finding design solutions for the animation’s base, it’s essential to spend enough time creating the proper animated effect. It took lots of iterations to finish the loading animation you see above. You have to test every animation to be sure it works for your design, and be ready to adjust color, positioning, etc. to get the best effect.

Prototype On The Actual Device

In an interview with the Rokid team, Jeshua Nanthakumar mentioned that the most effective AR prototypes are always physical. That’s because when you prototype on the actual device from the beginning, you make the design work well on the hardware and software that people actually use. When it comes to unique displays like the one on the Rokid Glasses, this methodology is especially important. By doing this, you’ll ensure your design is implementable.

Motion design language exploration for AR Glasses Rokid by Gleb Kuznetsov.

My team was responsible for designing the AR motion design language and loading animation for the AR glasses. We decided to use a 3D sphere that would rotate during loading and have nice reflections on its edges. The design of the animated effect took two weeks of hard work by motion designers, and it looked gorgeous on the high-res monitors of our design team, but the final result wasn’t good enough because the animation caused motion sickness.

Motion sickness is often caused by discrepancies between the motion perceived on the screen of the AR glasses and the actual movement of the user’s head. But in our case, the root cause of the motion sickness was different — because we had put so much attention into polishing details like shapes and reflections, we unintentionally made users focus on those details while the sphere was moving.

As a result, the motion happened in the periphery, and since humans are more sensitive to moving objects in the periphery, this caused motion sickness. We solved the problem by simplifying the animation. But it’s critical to mention that we would not have been able to find this problem without testing on the actual device.

If we compare the procedure for testing AR apps with that of traditional GUI apps, it is evident that testing AR apps requires more manual interaction. The person conducting the testing has to determine whether the app provides the correct output based on the current context.

Here are a few tips that I have for conducting efficient usability testing sessions:

  • Prepare a physical environment to test in. Try to create real-world conditions for your app — test it with various physical objects, in different scenes and with different lighting. The environment is not limited to scene and lighting, though.
  • Don’t try to test everything all at once. Use a technique of chunking. Breaking down a complex flow into smaller pieces and testing them separately is always beneficial.
  • Always record your testing session. Record everything that you see in the AR glass. A session record will be beneficial during discussions with your team.
  • Test for motion sickness.
  • Share your testing results with developers. Try to mitigate the gap between design and development. Make sure your engineering team knows what problem you face.

Conclusion

Similar to any other new technology, AR comes with many unknowns. Designers who work on AR projects play the role of explorers — they experiment and try various approaches in order to find the one that works best for their product and delivers value to the people who will use it.

Personally, I believe that it’s always great to explore new mediums and find new original ways of solving old problems. We are still at the beginning stages of the new technological revolution — the exciting time when technologies like AR will be an expected part of our daily routines — and it’s our opportunity to create a solid foundation for the future generation of designers.

Smashing Editorial
(cc, yk, il)

Source: Smashing Magazine, What I Learned From Designing AR Apps

Design Your Mobile Emails To Increase On-Site Conversion

dreamt up by webguru in Uncategorized | Comments Off on Design Your Mobile Emails To Increase On-Site Conversion

Design Your Mobile Emails To Increase On-Site Conversion

Suzanne Scacca



I find it interesting that Google has pushed so hard for web designers to shift from designing primarily for desktop to now designing primarily for mobile. Google’s right… but why only focus on designing websites to appeal to mobile users? After all, Gmail is a leader within the email client ranks, too.

Email can be an incredibly powerful driver of conversions for websites, according to a 2019 report from Barilliance.

On average, emails convert at a rate of 1.48%. That figure covers all sent emails, though — including the ones that go unopened as well as the ones that bounce. If you look only at emails that are opened by recipients, the average conversion rate jumps to 17.75%.

Let’s go even further with this:

Recent data from Litmus reveals that more emails are opened on mobile than on any other device:

Litmus data reveals that 43% of email opens occur on mobile devices. (Source: Litmus)

Many sources even put the average mobile open rate at well over 50%. But whether it’s 43% or 50%+, it’s clear that mobile is most commonly the first device people reach for to check their emails.

Which brings us to the following conclusion:

If users are more likely to open email on mobile and we know that opened emails convert at a higher rate than those that go unopened, wouldn’t it make sense for designers to prioritize the mobile experience when designing emails?

Mobile Email Design Tips to Increase Conversions

Let’s explore what the latest research says about designing emails for mobile users and how that can be used to increase opens, clicks and, later, your website’s conversion rates (on mobile and desktop).

Design the Same Email for Mobile and Desktop

Although email is often ranked as the most effective marketing channel for acquiring and retaining customers, that’s not really an accurate picture of what’s going on.

According to Campaign Monitor, here’s what’s actually happening with mobile email subscribers:

Campaign Monitor charts the progression from mobile email opens to click-through rate. (Source: Campaign Monitor)

The open rates on mobile are somewhat on par with the Litmus data earlier.

However, it can take multiple opens before the email recipient actually clicks on a link or offer within an email. And guess what? About a third of them make their way over to desktop — where they convert at a higher rate than those that stay on mobile.

As the report states:

Data from nearly 6 million email marketing campaigns suggests the shift to mobile has made it more difficult to get readers to engage with your content, unless you can drive subsequent opens in a different environment.

I’ve reconstructed the graphic above and filled it with the number of people who would take action from an email list of 1,000 recipients:

An example of how Campaign Monitor’s data translates into real-world numbers. (Source: Campaign Monitor)

At first glance, it looks as though mobile is the clear winner — at least in terms of driving traffic to the website. After the initial mobile open, 32 subscribers go straight to the website. After a few more opens on mobile, 5 more head over there.

Without a breakdown of what the user journey looks like when opened on desktop, though, the calculation of additional clicks you’d get from that portion of the list isn’t so cut-and-dried.

However, let’s assume that Litmus’s estimate of 18% desktop opens is accurate and that Campaign Monitor’s 12.9% click-through rate holds true whether subscribers open the email first on mobile or desktop. I think it’s safe to say that 23 clicks from desktop-only opens can be added to the total.

So, that brings it to:

37 clicks on mobile vs 26 on desktop.
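
Here is that back-of-the-envelope math as a quick sketch, using the rates quoted above (the mobile click counts are taken directly from the reconstructed graphic):

```javascript
// Funnel math for a list of 1,000 recipients, using the rates quoted above
// (Litmus open share, Campaign Monitor click-through rate).
const listSize = 1000;

// Desktop-only path: 18% of the list opens on desktop,
// and 12.9% of those opens click through.
const desktopOnlyOpens = listSize * 0.18;                        // 180 opens
const desktopOnlyClicks = Math.round(desktopOnlyOpens * 0.129);  // ≈ 23 clicks

// Mobile path: counts taken from the reconstructed graphic —
// 32 clicks after the first mobile open, 5 more after repeat mobile opens.
const mobileClicks = 32 + 5;                                     // 37 clicks

console.log({ mobileClicks, desktopOnlyClicks }); // { mobileClicks: 37, desktopOnlyClicks: 23 }
```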

Bottom line: while mobile certainly gets more email subscribers over to a website, the conversion-friendliness of desktop cannot be ignored.

Which is why you don’t want to segment your lists based on device. If you want to maximize the number of conversions you get from a campaign, enable subscribers to seamlessly move from one device to the other as they decide what action to take with your emails.

In other words, design the same exact email for desktop and mobile. But assume that the majority of subscribers will open the email on their mobile device (unless historical data of your campaigns says otherwise). Then, use mobile-first design and marketing tips to create an email that’s suitable for all subscribers, regardless of device.

Factor in Dark Mode When Choosing Your Colors

You don’t want there to be anything that stands in your users’ way when it comes to moving from email to website. That’s why you have to consider how their choice of color and brightness for their mobile screen affects the readability or general appearance of your email design.

There are a number of ways in which this can become an issue.

As we hear more and more about how harmful blue light from our devices can be, it’s no surprise that Dark Mode options are beginning to roll out. While it’s prevalent on desktops right now, it’s mostly in beta for mobile. The same goes for email apps.

That said, smartphone users can hack their own “Dark Mode” of sorts. This type of color inversion can be enabled through the iPhone’s “Accessibility” settings.

Gadget Hacks shows how iPhone users can hack their own ‘Dark Mode’. (Source: Gadget Hacks)

Essentially, what this does is invert all of the colors on the screen from light to dark and vice versa.

Unfortunately, the screenshotting tool on my iPhone won’t allow me to capture the colors exactly as they appear. However, what I can show you is how the inversion tool alters the color of your email design.

This is an email I received from Amtrak last week. It’s pretty standard with its branded color palette and brightly colored notices and CTA buttons:

What a promotional email from Amtrak looks like on the Gmail mobile app. (Source: Amtrak)

Now, here is what that same email looks like when viewed through my iPhone’s “Smart Invert” setting:

What a promotional email from Amtrak looks like in Gmail when colors are inverted. (Source: Amtrak)

The clean design of the original with the white font on the deep blue brand colors is gone. Now, there’s a harsh mix of colors and a hard-to-read Amtrak logo in its place.

You can imagine how this kind of inconsistent and disjointed color display would create an off-putting experience for mobile users.

But what do you expect them to do? If they’re struggling with the glare from their mobile device, Dark Mode (or some other brightness adjustment) will make it easier for them to open and read emails in the first place. Even if it means compromising the appearance of the email you so carefully designed.

One bright spot in all this is that the official “Dark Mode” being rolled out to iPhone (and, hopefully, other smartphones) soon won’t alter the look of HTML emails. Only plain-text messages will be affected.

However, it’s still important to be mindful of how the design choices you make may clash with a surrounding black background. Brightly colored backgrounds, in particular, are likely to jar against the black of the email app.

How do you solve this issue? Unfortunately, you can’t serve different versions of your email to users based on whether they’re viewing it in Dark Mode or otherwise. You’ll just have to rely on your own tests to confirm that potential views in Dark Mode don’t interfere with your design or message.

In addition to the standard testing you do, set your own smartphone up with Dark Mode (or its hack counterpart). Then, run your test email through the filter and see what happens to the colors. It won’t take long to determine what sort of colors you can and cannot design with for email.

Design the Subject Line

The subject line is the very first thing your email subscribers are going to see, whether it shows up as a push notification or they first encounter it in their inbox. What do you think affects their initial decision to click open an email rather than throw it in the Trash or Spam box immediately? Recognizing the Sender will help, but so will the attractiveness of the subject line.

As for how exactly you go about “designing” a subject line, there are a few things to think about. The first is the length.

Marketo conducted a study across 200+ email campaigns and 2 million emails sent to subscribers. Here is what the test revealed about subject line length:

Marketo tested over 2 million sent emails to determine the ideal subject line length. (Source: Marketo)

Although the 4-word subject line resulted in the highest open rate, it had a poor showing in terms of clicks. It was actually the 7-word subject line that seemed to have struck the perfect balance with subscribers, leading 15.2% of them to open the email and then 10.8% of them to click on it.

While you should still test this with your own email list, it appears that seven words is the ideal length for a subject line.

Next, you have to think about the buzzwords used in the subject line.

To start, keep this list of Yesware’s Spam Trigger Words out of it:

Yesware’s analysis and list of the top spam-trigger words. (Source: Yesware)

If you want to increase the chance that the email will be opened, read, clicked on, and eventually lead to an on-site conversion, you have to be savvy about which words appear in the subject line.
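
As a rough illustration, a team could script a simple pre-send check along these lines — the spam-trigger words below are just a few common examples (not Yesware’s full list), and the seven-word target comes from the Marketo data above:

```javascript
// A rough pre-send check for subject lines: aim for ~7 words and avoid a
// handful of common spam-trigger words (illustrative sample only).
const SPAM_TRIGGERS = ['free', 'guarantee', 'act now', 'winner', 'no obligation'];
const TARGET_WORD_COUNT = 7;

function checkSubjectLine(subject) {
  const words = subject.trim().split(/\s+/).filter(Boolean);
  const lower = subject.toLowerCase();
  return {
    wordCount: words.length,
    nearTargetLength: Math.abs(words.length - TARGET_WORD_COUNT) <= 2,
    spamTriggersFound: SPAM_TRIGGERS.filter((word) => lower.includes(word)),
  };
}

console.log(checkSubjectLine('Act now: your free upgrade is waiting'));
// → { wordCount: 7, nearTargetLength: true, spamTriggersFound: ['free', 'act now'] }
```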

What I’d suggest you do is bookmark CoSchedule’s Email Subject Line Tester tool.

CoSchedule tests and scores your email subject lines with one click. (Source: CoSchedule)

Here’s an example of how CoSchedule analyzes your subject lines and clues you in to what increases and decreases your open rates:

The first part of CoSchedule’s subject line analysis and scoring tool. (Source: CoSchedule)

As you can see, CoSchedule tells you which kinds of words increase open rates as well as those that don’t. Do enough of these subject line tests and you should be able to compile a good set of wording guidelines for your writers and marketers.

Further down, you’ll get more insight into what makes for a strongly worded and designed subject line:

The second part of CoSchedule’s subject line assessment and recommendations. (Source: CoSchedule)

CoSchedule will provide recommendations on how to shorten or lengthen the character and word counts based on best practices and results.

Finally, at the very bottom of your subject line test you’ll see this:

The final part of CoSchedule’s subject line tester includes an email client preview. (Source: CoSchedule)

This gives you (or, rather, your writer) a chance to see how the subject line will appear within the “design” of an email client. It’s not a problem if the words get cut off on mobile. It’s bound to happen. However, you still want everything that does appear to be appealing enough to click on.

Also, don’t forget about dressing up your subject lines with emoji.

When you think about it, emoji in mobile email subject lines make a lot of sense. Text messaging and social media are ripe with them. It’s only natural to use this fun and truncated form of language in email, too.

Campaign Monitor makes a good point about this:

If you replace words with recognizable emoji, you’ll create shorter subject lines for mobile users. Even if it doesn’t shorten your subject line enough to fit on a mobile screen, it’s still an awesome way to make it stand out from the rest of your subscribers’ cluttered inboxes.

The CoSchedule test will actually score you based on how (or if) you used emoji, too:

CoSchedule suggests that the use of emoji in subject lines will give you an edge. (Source: CoSchedule)

As you can see, CoSchedule considers this a competitive advantage in marketing.

Even just looking at my own email client, my eye is instantly drawn to the subject line from Sephora which contains a “NEW” emoji:

A Sephora email containing an emoji stands out from others in the inbox. (Source: Sephora)

Just be careful with which emoji you use. For starters, emoji are displayed differently from device to device, so a more obscure choice may not have the same effect on some of your subscribers.

There’s also the localization aspect to think about. Not every emoji is perceived the same way around the globe. As Day Translations points out, the fire symbol is one that could cause confusion as some countries interpret it as a literal fire whereas some may view it as a symbol for attraction.

That said, emoji have proven to increase both open rates and read rates of emails. Find the right mobile-friendly emoji to include in your subject line and you could effectively increase the number of subscribers who visit your website as a result.

Wrap-Up

There are so many different kinds of emails that go out from websites:

  • Welcome message
  • Post-purchase transaction email
  • Abandoned cart reminder
  • Promotional news
  • Product featurette
  • New content available
  • Account/rewards points
  • And more.

In other words, there are plenty of ways to get in front of email subscribers.

Just remember that the majority of them will first open your email on mobile. And some will reopen it on mobile over and over again until they’re compelled to click on it or trash it. It’s up to you to design the email in a way that motivates them to visit your website and, consequently, convert.


Smashing Editorial
(ra, yk, il)

Source: Smashing Magazine, Design Your Mobile Emails To Increase On-Site Conversion

Collective #526

dreamt up by webguru in Uncategorized | Comments Off on Collective #526



Our Sponsor

Divi: The Powerful Visual Page Builder

Divi is a revolutionary WordPress theme and visual page builder for WordPress. With Divi, you can build your website visually. Add, arrange and design content and watch everything happen instantly right before your eyes.

Try it






Servo

Servo is a modern, high-performance browser engine designed for both application and embedded use. Created by Mozilla Research.

Check it out




readme-md-generator

A CLI that generates beautiful README.md files by suggesting default answers based on your package.json and git configuration.

Check it out








Never-Blink

A project where you can randomly connect to a player around the world and challenge him/her for a game of no blinking.

Check it out






Free Font: Basier Mono

This modern and neutral monospaced font is based on the Basier font family and comes with a free regular style.

Get it





FullStack

A React/ApolloGraphQL/Node/Mongo project boilerplate that Jason Werner open-sourced after his client decided not to pay him.

Check it out


Collective #526 was written by Pedro Botelho and published on Codrops.


Source: Codrops, Collective #526

JAMstack Fundamentals: What, What And How

dreamt up by webguru in Uncategorized | Comments Off on JAMstack Fundamentals: What, What And How

JAMstack Fundamentals: What, What And How

Vitaly Friedman



We love pushing the boundaries on the web, and so we’ve decided to try something new. You probably have heard of JAMstack — the new web stack based on JavaScript, APIs, and Markup — but what does it mean for your workflow and when does it make sense in your projects?

As a part of our Smashing Membership, we run Smashing TV, a series of live webinars, every week. No fluff — just practical, actionable webinars with a live Q&A, run by well-respected practitioners from the industry. Indeed, the Smashing TV schedule looks pretty dense already, and it’s free for Smashing Members, along with recordings — obviously.

We’ve kindly asked Phil Hawksworth to run a webinar explaining what JAMStack actually means and when it makes sense, as well as how it affects tooling and front-end architecture. The one hour-long webinar is now available as well. We couldn’t be happier to welcome Phil to co-MC our upcoming SmashingConf Toronto (June 25-26) and run JAMStack_conf London, which we co-organize on July 9-10 this year as well. So, let’s get into it!

Phil Hawksworth: Excellent, okay, well let’s get into it then. Just by way of a very quick hello, I mean I’ve said hello already, Scott’s given me a nice introduction. But yes, I currently work at Netlify, I work in the developer experience team there. We are hopefully going to have plenty of time for Q&A but, as Scott mentioned, if you don’t get a chance to ask questions there, or if you just rather, you can ping them directly at me on Twitter, so my Twitter handle is my names, it’s Phil Hawksworth, so any time you can certainly ask me questions about JAMstack or indeed anything on Twitter.

Phil Hawksworth: But I want to start today just by kind of going back in time a little bit to this quote which really resonates very, very strongly with me. This is a quote from the wonderful Aaron Swartz who, of course, contributed so much to the Creative Commons and the open web and he wrote this on his blog way back in 2002, and he said, “I care about not having to maintain cranky AOLserver, Postgres and Oracle installs.” AOLserver, I had to look up to remind myself, was an open source web server at the time.

Phil Hawksworth: But this chimes really strongly with me. I also don’t want to be maintaining infrastructure to keep a blog alive, and that’s what he was talking about. And it was in this blog post on his own blog and it was titled “Bake, Don’t Fry”. He was picking on a term that someone who’d built a CMS recently had started to use, and he kind of popularized this term about baking (Bake, Don’t Fry); what he’s talking about there is pre-rendering rather than rendering on demand, so baking the content ahead of time, rather than frying it on demand when people come and ask for it — getting things away from request time and into kind of build time.

Phil Hawksworth: And when we’re talking about pre-rendering and rendering, what we mean by that is we’re talking about generating markup. I feel a bit self-conscious sometimes talking about kind of server render or isomorphic rendering or lots of these kind of buzzwordy terms; I got called out a few years ago at a conference, Frontiers Conference in Amsterdam when I was talking about rendering on the server and someone said to me, “You mean generating HTML? Just something that outputs HTML?” And that’s, of course, what I mean.

Phil Hawksworth: But all of this kind of goes a long way towards simplifying the stack. When we think about the stack that we serve websites from; I’m all about trying to simplify things, I’m super keen on trying to simplify the stack. And that’s kind of at heart of this thing called “JAMstack” and I want to try and explain this a little bit. The “JAM” in JAMstack stands for JavaScript, APIs and Markup. But that’s not enough really to help us understand what it means — what on earth does that really mean?

Phil Hawksworth: Well, what I want to try and do in the next half hour or so, is I want to kind of expand on that definition and give more of a description of what JAMstack is. I want to talk a bit about the impact and the implications of JAMstack, and, you know, think about what that can give us as to why we might choose it. Along the way, I’m going to try to mention some of the tools and services that will be useful, and hopefully, I’ll wrap up with some resources that you might want to dig into and perhaps mention some first steps to get you under way.

Phil Hawksworth: So, that’s the plan for the next half-hour. But, I want to, kind of, come back to this notion about simplifying the stack, because, hopefully, people who join this or have come to watch this video later on, maybe you’ve got a notion of what JAMstack is, or maybe it’s a completely new term, and you’re just curious. But, ultimately, there are a lot of stacks out there, already. There are lots of ways that you can deliver a website. It feels like we’ve been building different types of infrastructure for a really long time, whether that’s a LAMP stack or the MAMP stack, or the — I don’t know — the MEAN stack. There’s a bunch of them floating by on the screen here. There are lots and lots of them.

Phil Hawksworth: So, why on earth would we need another one? Well, JAMstack is, as I mentioned, is JavaScript/API/Markup, but if we try and be a tiny bit more descriptive, JAMstack is intended to be a modern architecture, to help create fast and secure sites and dynamic apps with JavaScript/APIs and pre-rendered markup, served without web servers, and it’s this last point which is, kind of, something that sets it apart and maybe, makes it a little bit more, kind of, interesting and unusual.

Phil Hawksworth: This notion of serving something without web servers, that sounds either magical or ridiculous, and hopefully, we’ll figure out what along the way. But to try and shed some light over this and describe it in a little bit more detail, it’s sometimes useful to compare it to what we might think of as a traditional stack, or a traditional way of serving things on the web. So, let’s do that just for a second. Let’s just walkthrough, perhaps, what a request might look like as it gets serviced in a traditional stack.

Phil Hawksworth: So, in this scenario, we got someone opening up a web browser and making a request to see a page. And maybe that request hits a CDN, but probably, more likely, it hits some other infrastructure that we are hosting — as the people who own this site. Maybe we tried to make sure that this is going to scale under lots of load because we, obviously, want a very popular and successful site. So, perhaps we got a load balancer, that has some logic in it, which will service that request to one of a number of web servers that we’ve provisioned and configured and deployed to. There might be a number of those servers.

Phil Hawksworth: And those servers will execute some logic to say, “Okay, here’s our template, we need to populate that with some data.” We might get our data from one of a number of database servers that will perform some logic to look up some data, return that to the web server, create a view that we then pass back through the load balancer. Perhaps, along the way, calling off to CDN, stashing some assets in the CDN, and I should clarify, a CDN is a Content Delivery Network, so a network of machines distributed around the Internet to try and get request service as close to possible to the user and add things, like caching.

Phil Hawksworth: So, we might stash some assets there, and ultimately, return a view into the browser, into the eyeballs of the user, who gets to then experience the site that we built. So, obviously, that’s, either, an oversimplification or a very general view of how we might service a request in a traditional stack. If we compare that to the JAMstack, which is servicing things in a slightly different way, this is how it might look.

Phil Hawksworth: So, again, same scenario, we’re starting in a web browser. We’re making a request for a view of the page, and that page is already in a CDN. It serves statically from there, so it’s returned to the user, into the browser, and we’re done. So, obviously, a very simplified view, but straight away, you can start to see the differences here in terms of complexity. In terms of places that we need to manage code, deploy code, all of those different things. So, for me, one of the core attributes of a JAMstack site is that it means that you’re building a site that’s capable of being served directly from a CDN, or even from a static file server. A CDN is something that we might want to put in place to handle load, but ultimately, this could be served directly from any kind of static file server, kind of static hosting infrastructure.

Phil Hawksworth: JAMstack, kind of, offers an opportunity to reduce complexity. Just comparing those two parts of the diagram that we’ll come back to a few times, over the course of the next half hour, you can see that it’s an opportunity to reduce complexity and reduce risk. And so, it means that we can start to enjoy some of the benefits of serving static assets, and I’m going to talk about what those are a little bit later on. But you might be looking at this and thinking, “Well, great, but isn’t this just the new name for static websites?” That’s a reasonable thing to level at me when I’m saying, “We’re going to serve things statically.”

Phil Hawksworth: But I want to come back to that. I want to talk about that a little bit more, but first of all, I want to, kind of, talk about this notion of stacks and what on earth is a stack, anyway? And I think of a stack as the layers of technology, which deliver your website or application. And we’re not talking about the build pipeline, or the development process, but certainly the way we serve sites can have a big impact on how we develop and how we deploy things, and so on.

Phil Hawksworth: But here, we’re talking about the technology stack, the layers of technology, that actually deliver your website and your application to the users. So, let’s do another little comparison. Let’s talk about the LAMP stack for a second.

Phil Hawksworth: The LAMP stack, you may remember, is made up of an Apache web server, for doing things like the HTTP routing and the serving of static assets. PHP, for some pre-processing — PHP: Hypertext Preprocessor — applying the logic, maybe building the views for the templates and what have you. And it has some access to your data, via MySQL, and then Linux is the operating system that sits beneath all of that, keeps it all breathing. We can wrap that up together notionally as this web server. And we may have many of these servers, kind of, sitting together to serve a website.

Phil Hawksworth: That’s a, kind of, traditional look at the LAMP stack, and if we compare that to the JAMstack, well, here, there’s a critical change. Here, we’re actually moving up a level, rather than thinking about the operating system and thinking about how we run the logic to deliver a website. Here we’re making an assumption that we’re going to be serving these things statically. So, we’re doing the HTTP routing, and the serving of assets from a static server. That can be reasonably done. We’ve got very good at this, over the years, building ways to deliver static websites, or static assets.

Phil Hawksworth: This might be a CDN, and again, we’ll talk about that in a moment. But the area of interest for us, is happening more in the browser. So, here, this is where our markup is delivered and is parsed. This is where JavaScript can execute, and this is happening in the browser. In many ways, the browser has become the runtime for the modern web. Rather than having the runtime in the server infrastructure, itself, now we’ve moved that up a level, closer to the user, and into the browser.

Phil Hawksworth: When it comes to accessing data, well, that’s happening through, possibly, external APIs, making calls via JavaScript to these external APIs to get server access, or we can think of APIs as the browser APIs, being able to interact with JavaScript with capabilities right there in your browser.

Phil Hawksworth: Either way, the key here about the JAMstack is that, we’re talking about things that are pre-rendered: they’re served statically and then, they maybe progressively enhanced in the browser to make use of browser APIs, JavaScripts, and what have you.
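
(As a tiny illustration of that pattern: the markup below is assumed to be pre-rendered and served statically, and JavaScript in the browser enriches it with data from an external API. The endpoint and element id are placeholders.)

```javascript
// Progressive enhancement on a pre-rendered JAMstack page.
async function showLatestComments() {
  const container = document.getElementById('comments');
  if (!container) return; // the static page still works without this step

  const response = await fetch('https://api.example.com/comments?post=42');
  if (!response.ok) return; // fail quietly; the pre-rendered content stands on its own

  const comments = await response.json();
  container.innerHTML = comments
    .map((comment) => `<li>${comment.author}: ${comment.text}</li>`)
    .join('');
}

document.addEventListener('DOMContentLoaded', showLatestComments);
```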

Phil Hawksworth: So, let’s just do this little side-by-side comparison here. Again, I just want to kind of reiterate that the JAMstack has moved up a level to the browser. And if we see the two sides of this diagram, with the LAMP stack on the left and effectively, the JAMstack on the right, you might even think that, well, even when we were building things with the LAMP stack, we’re still outputting mark-up. We’re still outputting JavaScript. We might still be accessing APIs. So, in many ways, the JAMstack is almost like a subset of what we were building before.

Phil Hawksworth: I used to sometimes talk about JAMstack as the assured stack, because it’s assures a set of tools and technologies that we need to deliver a site. But, either way, it’s a much simplified way of delivering a site that, kind of, does away with the need for things to execute and perform logic at the server at request time.

Phil Hawksworth: So, this can do a lot of things. This can, kind of, simplify deployments and again, I’m going to call back to this diagram from time-to-time. If we think about how we deploy our code and our site, for every deploy, from the very first one, through the whole development lifecycle, all the way through the life of the website. On the traditional stack, we might be having to change the logic and the content for every box on that diagram.

Phil Hawksworth: Whereas, in the JAMstack, when we’re talking about deploying, we’re talking about getting things to the CDN, getting things to the static server, and that’s what the deployment entails. The build, the kind of logic that runs the build — that can run anywhere. That doesn’t need to run in the same environment that’s hosting the web server. In fact, it doesn’t! That’s sort of the key to the JAMstack. We put the separation between what happens at request time, serving these static assets, and what happens at build time, which is the logic that you run to build and then deploy.

Phil Hawksworth: So, this kind of decoupling is a key thing, and I’m going to come back to that later on. We’ve got very good at serving static files, and getting things to a CDN or getting things to the file system (the file server) is somewhere that we’ve seen huge, kind of, advancement over the last few years. There are a lot of tools and processes, now, that can help us do this really well. Just to call out a few services that can serve static assets well and give workflows to getting your build to that environment, they’re the usual suspects that you might imagine the big clouds infrastructure providers, like Azure, AWS, and Google Cloud.

Phil Hawksworth: But then, there are others. So, the top one on the right is a service called Surge, which has been around for a few years. It allows you to run a command in your build environment and deploy your assets through to their hosting environment. Netlify, the next one down, is where I work and we do very much the same thing but we have different automation as well. I could go into it another time. And the one on the bottom, another static hosting environment site, called Now.

Phil Hawksworth: So, there’s a bunch of different options for doing this, and all of these services provide different tooling for getting to the CDN as quickly as possible, getting your sites deployed in the most seamless way that we can. And they all have something in common in that they’re building on the principle of running something locally. I often think of a static site generator as something that we might run in a build: when we run it, it takes things like content and templates and maybe data from different services, and it outputs something which can be served statically.
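
To make that build step concrete, here is a toy sketch of the kind of thing a static site generator does. It is not any particular generator, just a small Node.js script; the posts.json data file, the post fields, and the dist output folder are all hypothetical.

// Toy build script: combine structured data with a tiny template and write out
// plain HTML files that can be served straight from a CDN or any static host.
const fs = require('fs');

// Hypothetical content source; a real build might pull this from a headless CMS API.
const posts = JSON.parse(fs.readFileSync('posts.json', 'utf8'));

for (const post of posts) {
  // A deliberately small "template"; real generators use proper templating languages.
  const html = `<!DOCTYPE html>
<html>
  <head><title>${post.title}</title></head>
  <body>
    <h1>${post.title}</h1>
    ${post.body}
  </body>
</html>`;

  // One pre-rendered page per post, written into the output folder.
  fs.mkdirSync(`dist/${post.slug}`, { recursive: true });
  fs.writeFileSync(`dist/${post.slug}/index.html`, html);
}

Everything in dist/ is then just files, which is why the deployment step becomes a matter of copying assets to a static host rather than configuring servers.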

Phil Hawksworth: We can preview that locally on a static server, something that is kind of simple to do in any local development environment, and then the process of deploying is getting that to the hosting environment and, ideally, out to a CDN in order to get, kind of, scale. So, with that kind of foundation laid out, I want to address a bit of a common misconception when it comes to JAMstack sites. And I didn’t do myself any favors by opening this up by describing JAMstack sites as being JavaScript, APIs, and Markup. Because the common misconception is that every JAMstack site has to be JavaScript and APIs and Markup, but the thing that gets overlooked is that we don’t have to use all three; every one of these is, kind of, optional. We can use as much, or as little, of these as we like. In the same way that a LAMP stack site wouldn’t necessarily need to be hitting a database. Now, I’ve built things in the past that are served by an Apache server, on a Linux machine, and I’ve been using PHP, but I haven’t been hitting a database, and I wouldn’t start to rename the stack for that.

Phil Hawksworth: So, if we think about what a JAMstack site is, it could be a bunch of different things. It might be something that’s built out with a static site generator, like Jekyll, pulling content from YAML files to build a site that has no JavaScript and doesn’t hit APIs at all, and we serve it on something like GitHub Pages. That would be a JAMstack site. Or maybe we’re using a static site generator like Gatsby which, rather than being in a Ruby environment like Jekyll, is a JavaScript static site generator built in the React ecosystem.

Phil Hawksworth: That might be pulling content again, organized in Markdown files. It might be enriching that with calls to APIs, GraphQL APIs. It might be doing things in the browser, like JavaScript hydration, populating templates right there in the browser. And it might be served on Amazon S3. Is that a JAMstack site? Yeah, absolutely!

Phil Hawksworth: Moving on to a different static site generator, Hugo, which is built with Go! Again, we might be organizing content in Markdown files, adding interactions in the browser using JavaScript. Maybe not calling any external APIs and maybe hosting that on Google Cloud. Is it JAMstack? Absolutely! You see, I’m building to a theme here. Using something like Nuxt, another static site generator, this time built in the Vue ecosystem. Maybe that’s pulling content from JSON files? Again, we might be using JavaScript interactions in the browser, perhaps calling APIs to do things like e-Commerce, and serving it on another static hosting infrastructure, like Netlify. Is it JAMstack? Yes, it is!

Phil Hawksworth: But we might even go, you know, right off to the other end of the scale, as well, and think about a handmade, progressive web app that we’ve built artisanally, with hand-rolled JavaScript that we wrote ourselves. We’re packaging it up with webpack. We’re maybe using JSON web tokens and calling out to APIs to do authentication, interacting with different browser APIs, hosting it on Azure. Yes, that’s JAMstack as well!

Phil Hawksworth: And, you know, all of these things, and many more, can be considered JAMstack, because they all share one attribute: none of them is served by an origin server. None of them has to hit a server that performs logic at request time. These are being served as static assets, and then enriched with JavaScript and calls to APIs afterwards.

Phil Hawksworth: So, again, I just want to reiterate that a JAMstack means it’s capable of being served directly from the CDN. So, I want to just call out some of the impacts and implications of this, because why would we want to do this? Well, the first notion is about security, and we’ve got a greatly reduced surface area for attack, here. If we think about (coming back to this old diagram again), the places where we might have to deal with an attack, we have to secure things like the load balancer, the webservers, the database servers. All of these things, we have to make sure aren’t able to be penetrated by any kind of an attack and, indeed, the CDN.

Phil Hawksworth: The more pieces we can take out of this puzzle, the fewer places can be attacked and the fewer places we have to secure. Having fewer moving parts to attack is really very valuable. At Netlify, we operate our own CDN, so we get the luxury of being able to monitor the traffic that comes across it, and even though none of the sites hosted on Netlify, all of the JAMstack sites that you might imagine, has a WordPress admin page on it, because it’s completely decoupled and there is no WordPress admin, we still see a huge volume of traffic probing for things like wp-admin, looking for ways in, looking for attack vectors.

Phil Hawksworth: I really love some of the things that Kent C. Dodds has done. I don’t know if you are familiar with the React community; you’ve probably encountered Kent C. Dodds in the past. He doesn’t use WordPress, but he still routes this URL, the wp-admin one. I think he used to route it through to a Rick Roll video on YouTube. He’s certainly been trolling people who have gone probing for it. But, the point is, by decoupling things in that way and, kind of, separating what happens at build time from what happens at request time, we can just drastically reduce our exposure. We’ve got no moving parts at request time. The moving parts are all completely decoupled at build time. Potentially, well, necessarily, on completely different infrastructure.

Phil Hawksworth: This, of course, also has an impact on performance, as well. Going back to our old friend here, there are places across the stack where we might want to try and improve performance, where there’s logic that needs to be executed in these different places. The way this often happens in traditional stacks is that they start to add layers, add static layers, in order to improve performance. So, in other words, they try and find ways to start behaving as if it’s static, to not have to perform that logic at each level of the stack, in order to speed things up. So, we’re starting to introduce things like caching all over the infrastructure, and an obvious place we might find to do that is in the web server: rather than perform that logic, we want to serve something immediately without performing that logic.

Phil Hawksworth: So, it’s kind of like a step towards, kind of, being pseudo-static. Likewise in database servers, we want to add caching layers to cache common queries. Even the load balancer, and the whole CDN, you can think of as a cache. But on the traditional stack, we need to figure out how to manage that cache, because not everything will be cached. So, there’s going to be some logic about what needs to be dynamically populated versus what can be cached. In the JAMstack model, everything is cached. Everything is cached from the point that the deployment is done, so we can think about it completely differently.

Phil Hawksworth: This, then, starts to, kind of, hint through to scaling, as well. And by scale, I’m talking about: how do we handle large loads of traffic? Traditional stacks will start to add infrastructure in order to scale. So, yes, the caching we’re starting to put in place in our traditional stack will help, to a degree. What typically happens is, in order to handle large loads of traffic, we’ll start expanding the infrastructure and adding more servers, more pieces to handle these requests. Costing these things out and estimating the load is a big overhead, and it can be a headache for anyone doing technical architecture. It certainly was for me, which is why I started to lean much more towards the JAMstack approach, where I just know that everything is served from the CDN, which is designed by default to handle scale, to handle performance, right out of the gate.

Phil Hawksworth: So, I also want to give a nod to developer experience, and the impact this can have there. Now, developer experience should never be seen as something which trumps user experience, but I believe that a good developer experience can reduce friction, can allow for developers to do a much better job of building up to great user experiences!

Phil Hawksworth: So, when we think about where the developer experience lives, and where the areas of concern for a developer are here: well, in a traditional stack, we might need to think about how we get the code to all of these different parts of the infrastructure, and how they all play together. In the JAMstack world, really, what we’re talking about is this box here at the bottom. You know, how do we run the build and then, how do we automate a deployment to get something served in the first place? And the nice thing is that, in the JAMstack model, what you do in that build is completely up to you.

Phil Hawksworth: That’s a really well-defined problem space, because ultimately, you’re trying to build something you can serve directly from a CDN. You want to pre-render something, using whatever tools you like: whether it’s a static site generator built in Ruby or Python or JavaScript or Go or PHP, you have the freedom to make that choice. And so, that can create a much nicer environment for you to work in. And also, it creates an opportunity to have real developer confidence, because a real attribute of JAMstack sites is that they can be much more easily served as immutable and atomic deployments.

Phil Hawksworth: And I want to, kind of, jump away from the slides just for a moment, to describe what that means, because an immutable deployment and an atomic deployment can… (that can just sound a little bit like marketing speak) but what I’m going to do, is I’m going to jump into my browser. Now … actually, I’m going to go back for a second. Let me… just do this.

Phil Hawksworth: Here we are. This will be easier. Jumping right into the page. Now, Scott, you will tell me, Scott, if you can’t see my browser, I’m hoping. Now, assuming everyone can see my browser.

Scott: Everything looks good.

Phil Hawksworth: Excellent! Thank you very much!

Phil Hawksworth: So, what I’m doing here is I’m using Netlify as an example, as an example of the service. But, this is an attribute which is common to sites that can be hosted statically. So, when we talk about an immutable deployment, what we mean is that, rather than each deployment of code having to touch lots of different parts of the infrastructure and change lots of things, here we’re not mutating the state of the site on the server. We’re creating an entirely new instance of the site for every deployment that happens. And we can do that because the site is a collection of static assets.

Phil Hawksworth: Here, I’m looking at the deployments that have happened for my own website. I’ll give you a treat. There you are, that’s what it looks like. It’s just a blog; it doesn’t look like anything particularly remarkable or exciting. It’s a statically generated blog, but what I have here is every deployment that’s ever happened, and each lives on in perpetuity, because it’s a collection of static assets that are served from a CDN. Now, I could go back as far as my history can carry me and go and look at the site as it was back in… when was this? This was August, 2016. And by virtue of it being a set of static assets, we’re able to host this on its own URL that lives on in perpetuity, and if I even wanted to, I could decide to go in and publish that deployment.

Phil Hawksworth: So, now, anyone who’s looking at my website, if I go back to my website here, if I refresh that page, now that’s being served directly from the CDN with the assets that were there before. Now, I can navigate around again. Here, you can see. Look, I was banging on about this, I was using these terrible terms like isomorphic rendering and talking about the JAMstack back in 2016. So, this is now what’s being served live on my site. Again, because these are immutable deployments that just live on forever. Now, just for my own, kind of, peace of mind, I’m going to… is this the first page? Yeah. I’m going to go back to my latest deployment. I’ll have to switch back again, and get myself back into the real world. Let me just make sure this is okay.

Phil Hawksworth: Okay! Great! So, now this is back to serving my previous version, or rather my latest version of the site. I’ll hop back to Keynote. So, this is possible because things are immutable and atomic. The atomic part of that means, again, that the deployment is completely contained. So, you never get to the point where some of the assets are available on the web server but some of them aren’t. Only when everything is there, complete, do we toggle the serving of the site over to the new version. Again, this is the kind of thing you can do much more easily if you’re building things out as a JAMstack site that serves directly from the CDN as a bunch of assets.

Phil Hawksworth: I noticed that my timer has reset, after going back and forward from keynote, so I think I have about six or seven minutes left. Tell me, Scott, if—

Scott: So, yeah, we’re still good for about ten minutes.

Phil Hawksworth: Ten minutes? Okay, wonderful!

Scott: There’s no rush.

Phil Hawksworth: Thank you, that should be good!

Phil Hawksworth: So, just switching gear a tiny bit and talking about APIs and services (since APIs are part of JAMstack), let’s look at the kind of services we might be able to use in a JAMstack site. You know, we might be using services that we built in-house, or we might be using bought services. There are lots of different providers who can do things for us, and that’s because that’s their expertise. Through APIs, we might be pulling in content from content management systems as a service, and there’s a bunch of different providers for this, who specialize in giving a great content management experience and then exposing the content through an API, so you’re able to pull it in.

Phil Hawksworth: Likewise, there are different ways that you can serve assets. People like Cloudinary are great at this, for doing image optimization, serving assets directly to your sites, again, through APIs. Or what about e-Commerce? You know, there are places like Stripe or Snipcart that can provide us APIs, so that we don’t have to build these services ourselves and get into the very complex issues that come with trying to build an e-Commerce engine. Likewise, identity, from people like Auth0, or using OAuth. There are lots of services that are available, and we can consume these things through APIs, either in the browser or at build time, or sometimes, a combination of both.

Phil Hawksworth: Now, some people might think, “Well, that’s fine, but I don’t want to give the keys to the kingdom away. I don’t want to risk giving these services over to external providers,” and to that, I say, well, you know, vendors who provide a single service really depend on its success. If there’s a company whose core business, or entire business, is building out an e-Commerce engine, an e-Commerce service for you, they’re doing that for all of their clients, all of their customers, so they really depend on its success and they have the specialist skills to do that.

Phil Hawksworth: So, that kind of de-risks it for you a great deal. Especially when you start to factor in the fact that you can have your contractual and service-level agreements to give you that extra security. But it’s not all about bought services, it’s also about services you can build yourselves. Now, there are a number of ways that this can happen, but sometimes, you absolutely need a little bit of logic on the server. And so far, I’ve just been talking about taking the server out of the equation. So, how do we do that?

Phil Hawksworth: Well, this is where serverless can really come to the rescue. Serverless and JAMstack just fit together really beautifully. And when I’m talking about serverless, I’m talking now about functions as a service. I know that serverless can be a bit of a loaded term, but here, I’m talking about functions as a service. Because functions as a service can start to enable a real micro-services architecture. Using things like AWS Lambda or Google Cloud Functions or similar functions as a service can allow you to build out server infrastructure without a server. Now, you can start deploying JavaScript logic to something that just runs on demand.

Phil Hawksworth: And that means you can start supplementing some of these other services with, maybe, very targeted small services you build yourselves that can run as serverless functions. These kinds of smaller services are easier to reason about, understand, and build out, and they create much greater agility. I want to just mention a few examples and results from JAMstack sites. I’m not going to go down the serverless avenue too much right now. We can, maybe, come back to that in the questions. I really just kind of want to switch gear and, thinking about time a little bit, talk about some examples and results.

Phil Hawksworth: Because there are some types of sites that lend themselves in a very obvious way to a JAMstack approach. Things like the documentation for React, or Vue.js; those are [inaudible 00:32:40], pre-rendered JAMstack sites. As are sites for large companies, such as Citrix; this is a great example of Citrix’s multi-language documentation. You can actually view the video from the JAMstack conference that happened in New York, where Beth Pollock, who worked on this project, talked about the change that went on in that project: from building on traditional infrastructure to a JAMstack approach, building with Jekyll, which is not necessarily the fastest static site generator, but still, they saw a huge improvement.

Phil Hawksworth: Beth talked about the turnaround time for updates going from weeks to minutes. Maybe people are kind of frowning at the idea of weeks for updates to sites, but sometimes in big complex infrastructure, with lots of stakeholders and lots of moving parts, this really is the situation we often find ourselves in. Beth also talked about the estimated annual cost savings for this move to a JAMstack site: to get the site running properly, they estimated savings of around 65%. That’s a huge saving. Changing gear to a slightly different type of site, something a little closer to home, Smashing Magazine itself is a JAMstack site, which might be a little bit surprising, because on one hand, yes, there’s lots of articles and it’s all content which is published regularly, but not every minute of the day, for sure.

Phil Hawksworth: So, that might lend itself, perhaps, to something that’s pre-generated, but of course, there’s also a membership area and an events section, and a job board, and e-Commerce, and all of these things. This is all possible on the JAMstack because not only are we pre-rendering, but we’re also enriching things with JavaScript in the front end to call out to APIs, which let some of these things happen. I think I saw Vitaly arrive on the call, so that’s going to be good; we might be able to come back to this in a few minutes.

Phil Hawksworth: But the project that migrated Smashing Magazine onto a JAMstack approach, I believe, simplified the number of platforms from five, effectively, down to one. And I’m using Vitaly’s words directly here: Vitaly talked about having some caching issues, trying to make the site go quickly, using probably every single WordPress caching plug-in out there, and goodness knows, there are a few of them! So, Smashing Magazine saw an improvement in performance: time to first load went from 800 milliseconds to 80 milliseconds, while also simplifying the infrastructure that served the site up in the first place. So, it’s kind of interesting to see the performance gains that can happen there.

Phil Hawksworth: Another totally different type of site. This is from the Google Chrome team, who built this out to demonstrate at Google I/O this year. This very much feels like an application. This is Minesweeper in a browser. I apologize if you’re watching me play this. I’m not playing it while talking to you; I recorded this some time ago and it’s agony to see how terrible I seem to be at Minesweeper while trying to record. That’s not a mine, that can’t be!

Phil Hawksworth: Anyway, we’ll move on.

Phil Hawksworth: The point of that is, this is something that feels very much more like an app, and it was built in a way to be very responsible about the way it used JavaScript. The payload to interactive was 25KB. This progressively would download and use other resources along the way, but it meant that the time to interact was under five seconds on a very slow 3G network. So, you can be very responsible with the way you use JavaScript and still package up JavaScript, as part of the experience for a JAMstack site.

Phil Hawksworth: So, I’m kind of mindful of time. We’re almost out of time, so what is the JAMstack? Well, it’s kind of where we started from. JAMstack sites are rendered in advance: they’re not dependent on an origin server (that’s kind of key), and they may be progressively enhanced in the browser with JavaScript. But as we’ve shown, you don’t have to use JavaScript at all. You might just be serving that statically, ready to go, without that. It’s an option available to you.

Phil Hawksworth: The key tenet, I think, of JAMstack sites is that they’re served without web servers. They’re pre-rendered and done in advance.

Phil Hawksworth: If you’re interested in more, it’s already been mentioned a little bit, there is a conference in London in July — July 9th and 10th. The speakers are going to be talking about all kinds of things to do with performance in the browser, things that you can do in the browser, results of building on the JAMstack and things to do with serverless.

Phil Hawksworth: There’s also a bunch of links in this deck that I will share, after this presentation, including various bits and pieces to do with static site generation, things like headless CMS, the jamstack.org site itself, and a great set of resources on a website called “The New Dynamic” which is just always full of latest information on JAMstack. We’re out of time, so I’ll wrap it up there, and then head back across to questions. So, thanks very much for joining and I’d love to take questions.

Scott: Thanks, Phil. That was amazing, thank you. You made me feel quite old when you pulled up the Minesweeper reference, so—

Phil Hawksworth: (laughs) Yeah, I can’t take any credit for that, but it’s kind of fascinating to see that as well.

Scott: So, I do think Vitaly is here.

Vitaly: Yes, always in the back.

Phil Hawksworth: I see Vitaly’s smiling face.

Vitaly: Hello everyone!

Phil Hawksworth: So, I’m going to hand it over to Vitaly for the Q&A, because I seem to have a bit of a lag on my end, so I don’t want to hold you guys up. Vitaly, I’ll hand it over to you.

Scott: Okay. Thanks, Phil.

Vitaly: Thanks, Scott.

Vitaly: Hello—

Vitaly: Oh, no, I’m back. Hello everyone. Now Scott is back but Phil is gone.

Scott: I’m still here! Still waiting for everything.

Vitaly: Phil is talking. Aw, Phil, I’ve been missing you! I haven’t seen you, for what, for days, now? It’s like, “How unbelievable!” Phil, I have questions!

Vitaly: So, yeah. It’s been interesting for us, actually, to move from WordPress to JAMstack — it was quite a journey. It was quite a project, and the remaining moving parts and all. So, it was actually quite an undertaking. So, I’m wondering, though, what would you say, like if we look at the state of things and if we look in the complexes, itself, that applications have. Especially if you might deal with a lot of legacy, imagine you have to deal with five platforms, maybe seven platforms here, and different things. Maybe, you have an old legacy project in Ruby, you have something lying on PHP, and it’s all kind of connected, but in a hacky way. Right? It might seem like an incredible effort to move to JAMstack. So, what would be the first step?

Phil Hawksworth: So … I mean, I think you’re absolutely right, first of all. Re-platforming any site is a big effort, for sure. Particularly if there’s lots of legacy. Now, the thing that I think is kind of interesting is an approach that I’ve seen getting more popular, which is identifying attributes of the site, parts of the site, that might lend themselves more immediately to being pre-generated and served statically than others. You don’t necessarily have to do everything as a big bang. You don’t have to do the entire experience in one go. So, one of the examples I shared, kind of, briefly was the Citrix documentation site.

Phil Hawksworth: They didn’t migrate all of Citrix.com across to being JAMstack. They identified a particular part that made sense to be pre-rendered, and they built that part out. And then, what they did was they started to put routing in front of all the requests that would come into their infrastructure. So, it would say, “Okay, well, if it’s in this part of the domain, either in the sub-domain or maybe under a certain path, route that through to something which is static; the rest of it, pass that through to the rest of the infrastructure.”

Phil Hawksworth: And I’ve seen that happen, time and time again, with some intelligent redirects, which, thankfully, is something that you can do reasonably simply on the JAMstack. You can start to put fairly expressive redirect engines in front of the site. It means that you can pass things through to just that section of the site that you’ve tried to take on as a JAMstack project. So, choosing something and focusing on that first, rather than trying to do a big bang where you do all of the legacy migration in one go. I think that’s key, because, yeah, trying to do everything at once is pretty tough.

Vitaly: It’s interesting, because just, I think, two days ago, maybe even today, Chris Coyier wrote an article renaming JAMstack to SHAMstack, where, essentially, it’s all about JavaScript and how we need to think about static hosting, because JavaScript could be hosted statically as well. And it’s interesting, because he was saying exactly that. When we think about JAMstack, very often we kind of tend to stay in camps. It’s either fully rendered and it lives as a static thing, somewhere there, in a box, and it’s served from a CDN and that’s it, or it’s fully expressive and reactive and everything you ever wanted. And actually, he was really right about a few things, like identifying some of the things that you can off-load to the static side, to generated assets, so to say.

Vitaly: And, JAMstackify, if you will, some of the fragments of your architecture. Well, that’s a new term I’m just going to coin, right there! JAMstackify.

Phil Hawksworth: I’m going to register the domain quickly, before anybody else.

Phil Hawksworth: And it’s a nice approach. I think, it kind of makes my eye twitch a little bit when I hear that Chris Coyier has been redubbing it the SHAMstack, because it makes it sound like he thinks it’s a shambles. But I know that he’s really talking about static-hosting and markup, which I—

Vitaly: Yes, that’s right.

Phil Hawksworth: I really like, because the term JAMstack can be really misleading, because it’s trying to cover so many different things, and the point I was trying to make, I probably hammered it many times in those slides, is that it can be all kinds of things. It’s so broad, but the key is pre-rendering and hosting the core of the site statically. It’s very easy for us to get into religious wars about whether it needs to be a React site. It has to be a React app in order to be a JAMstack site, or it’s a React app, so it can’t be JAMstack. But, really, the crux of it is, whether you use JavaScript or not, whether you’re calling APIs or not, if you pre-render and get things onto a static host that can be very performant, that’s the core of JAMstack.

Vitaly: Yes, absolutely.

Phil Hawksworth: We’re very fortunate that browsers are getting so much more capable, and the APIs that are there within browsers themselves can allow us to do much more as well. So, that kind of opens the doors even further, but it doesn’t mean that everything that we build as a JAMstack site has to make use of everything. Depending on what we’re trying to deliver, that’s how we should start to choose the tools that we’re playing with to deploy those things.

Vitaly: Absolutely. We have Doran here. Doran, I think I know, Doran. I have a feeling that I know Doran. He’s asking, “Do you expect serverless to be gravitating towards seamless integration with JAMstack from [inaudible 00:44:36]? What is referred to as the A in JAM.

Phil Hawksworth: That’s a great question, because I think serverless functions just go so well with JAMstack sites. In many ways, in fact, I think someone once asked me if JAMstack sites are serverless, and I squirmed about that question, because serverless is such a loaded term. But, in many ways, it’s bang-on, because I was talking, time and time again, about how there’s no origin server. There’s no server infrastructure for you to manage. In fact, I once wrote a blog post called “Web Serverless,” because the world needs another buzz term, doesn’t it?

Phil Hawksworth: And really, the kind of point of that was, yes, we’re building things without servers. We don’t want to have to manage these servers, and serverless functions, or functions as a service, just fits into that perfectly. So, in the instances that you do need an API that you want to make a request to, where really it’s not sensible for you to make that request directly from the browser. So, for instance, if you’ve got secrets, or keys, in that request, you might not want those requests — that information — ever exposed in the client. But we can certainly proxy those things, and typically, traditionally, what we need to do then, is spin-up a server, have some infrastructure that was effectively doing little more than handling requests, adding security authentication to it and passing those requests on, proxying them back.

Phil Hawksworth: Serverless functions are perfect for that. They’re absolutely ideal for that. So, I sometimes think of serverless functions, or functions as a service, almost as like an escape hatch, for when you just need some logic on a server but you don’t want to have to create an entire infrastructure. And you can do more and more with that, and the development pipelines, the development workflows, for functions as a service are maturing. It’s getting more accessible for JavaScript developers to be able to build these things out. So, yeah, I really think those two things go together very nicely.
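
As an illustration of the proxying pattern described above, a minimal sketch of such a function might look like the following. The handler shape follows the common AWS Lambda / Netlify Functions convention; the API URL, the EXAMPLE_API_KEY environment variable, and the node-fetch dependency are assumptions made for this example, not part of any particular service.

// Hypothetical serverless function that proxies a request to a third-party API
// so the secret key never appears in client-side code.
const fetch = require('node-fetch');

exports.handler = async function (event) {
  // The secret lives in an environment variable on the serverless platform.
  const response = await fetch('https://api.example.com/v1/orders', {
    headers: { Authorization: `Bearer ${process.env.EXAMPLE_API_KEY}` },
  });

  if (!response.ok) {
    return { statusCode: response.status, body: 'Upstream API error' };
  }

  const data = await response.json();

  // The browser only ever talks to this function, never to the API itself.
  return {
    statusCode: 200,
    body: JSON.stringify(data),
  };
};

The front end would call this function’s own URL with fetch, so the key stays on the server side of the function boundary.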

Vitaly: All right, that’s a very comprehensive answer. Actually, I attended a talk just recently, where a front-end engineer from Amazon was speaking about serverless and the Lambda functions they’re using, and I was almost gone. He was also speaking about Docker, and Kubernetes, and all those things, the DevOps world. I was sitting there, thinking, “How did he end up there? I don’t understand what’s going on!” I had no idea what was going on.

Phil Hawksworth: Exactly, but the thing is, it used to be the… I was… I accepted that I didn’t understand any of that world, but I didn’t have any desire to, since that was for an entirely different discipline. And that discipline is still really important. You know, people who are designing infrastructure — that’s still really key. But, it just feels, now, that I’m tempted. As someone with a front-end development background, as a JavaScript developer, I’m much more tempted to want to play in that world, because the tools are coming, kind of, closer to me.

Phil Hawksworth: It’s much more likely that I might be able to use some of these things, and deliver things kind of safely, rather than just as an experiment of my own, which is where I used to be dabbling. So, it feels like we’re becoming more powerful as web developers, which is exciting to me.

Vitaly: Like Power Rangers, huh?

Vitaly: One thing I do want to ask you, though, and this is actually something we discussed already, maybe, a week ago, but I still wanted to bring it up, because the one thing that you mentioned in the session was the notion of having a stand-alone instance of every single deploy, which is really cool. The question, though, is if you have a large site, with tens of thousands of pages, you really don’t want to redeploy everything, every single time. So, essentially, if you have, like, if you’re mostly using the static side of things. So, we had this idea for a while, and I know this is actually something that you brought up last time. The idea of atomic deployments.

Vitaly: Where you actually, literally, were served some sort of diff between two different versions, or snapshots, of the set-up. So, if you, say, change the header everywhere, then, of course, every single page has to be redeployed. But if you change, maybe, a component, like, let’s say, a carousel, that maybe affects only a thousand pages, then it wouldn’t make sense to redeploy all 15,000 pages, but only this thousand. So, can we get there? Is it a magical idea that’s out there, or is it something that’s quite tangible at this point?

Phil Hawksworth: I think that is, probably, the Holy Grail for static site generators and this kind of model because, certainly, you’ve identified probably the biggest hurdle to overcome. Or the biggest ceiling that you bump into. And that is websites that have many tens of thousands, or hundreds of thousands, or millions of URLs; the notion that the build can become very long. Being able to detect which URLs will change, based on a code change, is a challenge. It’s not insurmountable, but it’s a big challenge. Understanding what the dependency graph is across your entire site, and then intelligently deploying that, is really tough to do.

Phil Hawksworth: Because as you mentioned, a component change might have very far-reaching implications, but it’s difficult, always, to know how that’s going to work. So, there are a number of static site generators, projects, that are putting some weight behind that challenge, and trying to figure out how they do partial regeneration and incremental builds. I’m very excited at the prospect that that might get solved one day, but at the moment, it’s definitely a big challenge. You can start to do things like try to logically shard your site, and think about, again, kind of similar to the migration issue: well, this section I know is independent in terms of, kind of, some of the assets that it uses, or the type of content that lives there, so I can deploy it individually.

Phil Hawksworth: But that’s not ideal to me. That’s not really the perfect scenario. One of the approaches that I’ve explored a little bit, just as a proof of concept, is thinking about how you do things like making intelligent use of 404s. So, for instance, a big use case for very large sites, maybe news sites, is when a breaking news story happens: they need to be first to get it deployed out there. They need to get a URL up there. On something like BBC News, you’ll see that the news story will arrive on the website, and then over time they’ll add to it, incrementally, but getting there first is key. So, having a build that takes 10 minutes, 20 minutes, whatever it’s going to be, that could be a problem.

Phil Hawksworth: But if their content is abstracted and can be called from an API (I mentioned content management systems that are abstracted, like Contentful, or Sanity, or a bunch of those, anything that has a content API), then changes to that content will trigger a build and it will go through the run. But the other thing that could happen is this: if you publish the URL for that story and publicize it even before the build has run, and someone hits that URL, then the first stop on the 404, instead of saying, “We haven’t got it,” is actually to hit that API directly. You can say, “Well, the build hasn’t finished populating that page yet, but I can do it in the client.” I can go directly to the API, get that content, and populate it in the client.

Vitaly: Hmm, interesting.

Phil Hawksworth: So, even while the build is still happening, you can start populating those things. And then, once the build’s finished, of course, it wouldn’t hit a 404. You would hit that statically rendered page. So, there are techniques and there are strategies to tackle it. It’s a very long, rambling answer, I’m sorry, but my conclusion is, yeah, that’s a challenge. Fingers crossed we’ll get better strategies.
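
As a rough sketch of that 404 fallback idea, the script below could live on a statically served 404 page and try a content API before giving up. Everything here is hypothetical: the content.example.com endpoint, the response shape, and the page structure are stand-ins rather than any particular CMS.

// On the 404 page: before showing the error, ask the content API whether this
// article exists but simply hasn't been pre-rendered and deployed yet.
(async function () {
  const slug = window.location.pathname.replace(/\/$/, '').split('/').pop();

  try {
    const res = await fetch(`https://content.example.com/articles/${slug}`);
    if (!res.ok) return; // genuinely missing: leave the normal 404 message in place

    const article = await res.json();

    // Render the content client-side while the build catches up.
    document.querySelector('main').innerHTML = `
      <h1>${article.title}</h1>
      ${article.body}
    `;
  } catch (err) {
    // Network or API failure: fall back to the standard 404 content.
    console.error(err);
  }
})();

Once the build completes, the same URL is served as a pre-rendered page and this script never has to run.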

Vitaly: Yeah, that’s great. So, at this point, we’re really thinking not just about performance in terms of content delivery, but also about performance in terms of build speed, of building and deployment. So, this is also something that we’ve been looking into for quite a bit of time now, as well.

Vitaly: One more thing I wanted to ask you. So, this is interesting, like this technique that you mentioned. How do you learn about this? Is this just something people tend to publish on their own blogs, or is there some medium, some central repository, where you can get some sort of case studies of how companies moved, or have failed to move, to JAMstack?

Phil Hawksworth: So, this landscape is kind of maturing a little bit, at the moment. I mean, with some of these examples, I think I’m in a very fortunate position: I work somewhere where I’m in a role where I’m playing with the toys, coming up with interesting ways to use them and starting to experiment with them. So, these proofs of concept are, kind of, things that I get to experiment with to try to address these challenges. But, as I kind of mentioned earlier, there was a case study shown at the JAMstack conference in New York, and certainly, at events like that, we’re starting to see best practices, industry practices and industry approaches, being talked about. And certainly, I want to see more and work on more case studies to get them in places like Smashing Magazine, so that we can share this information much more readily.

Phil Hawksworth: I think large companies and the enterprise space are gradually adopting JAMstack, in different places, in different ways, but the word is still slow to get out there. So I think, each time a company adopts it and shares their experience, we all get to learn from that. But I really want to see more and more of these case studies get published, so that we can learn, particularly, how these kinds of challenges are overcome.

Vitaly: Alright, so, then, maybe just the last question from me, because I always like to read questions. So, the JAMstack land, if you could change something, maybe there is something that you desperately would love to see, beyond deployments. Anything else that would be really making you very happy? That would make your day? What would that be? What’s on your wishlist, for JAMstack?

Phil Hawksworth: What a question. I mean, had we not talked about incremental builds, that would be—

Vitaly: We did. That’s too late, now. This card has been passed, already. We need something else.

Phil Hawksworth: So—

Vitaly: What I mean is, if you look at the web platform, there are so many exciting things happening. We have Houdini, we have web components coming, and everything that could be changing the entire landscape of how we write components. On the other side, we have all this magical, fancy world with SSGs and, of course, obviously, we also have single-page applications and all. What are you most excited about?

Phil Hawksworth: I’m going to be obtuse here, because there is so much stuff that’s going on, it’s exciting, and there are so many new capabilities that you can make use of in the browser. The thing that I really get excited about is people showing restraint (laughs) and, as I said, it’s a dull answer, but I love seeing great executions that are done with restraint, in a way that’s thoughtful about the wider audience. It’s really good fun, and gratifying, to build with the shiniest new JavaScript library or the new browser API that does, I don’t know, scratch and sniff capabilities in the browser, which we desperately need, any day now.

Phil Hawksworth: But I really like seeing things that I know are going to work in many, many places. They’re going to be really performant, are going to be sympathetic to the browsers that exist — not just on the desks of CEOs and CTOs who got the snazzy toys, but also people who have got much lower-powered devices, or they’ve got challenging network conditions and those kinds of things. I like seeing interesting experiences, and rich experiences, delivered in a way that are sympathetic to the platform, and kind of, compassionate for the wider audience, because I think the web reaches much further than us, the developers, who build things for it. And I get excited by seeing interesting things done, in ways that reach more people.

Phil Hawksworth: That’s probably not the answer you were necessarily—

Vitaly: Oh, that’s a nice ending. Thank you so much. No, that’s perfect, that really is. All right, I feel everything went well! Thank you so much for being with us! I’m handing over to Scott!

Phil Hawksworth: Great!

Vitaly: I’m just here to play questions and answers. So, thank you so much, Phil! I’m still here, but Scott, the stage is yours, now! Maybe you can share with us what’s coming up next on Smashing TV?

Scott: I will, but first, Phil, I can’t wait to see how the implementation of the scratch-and-sniff API works. Sounds very interesting. And, Vitaly, JAMstackify is already taken.

Vitaly: (dejected) Taken?! Can we buy it?

Scott: No, it exists!

Vitaly: Well, that’s too late. I’m always late.

Phil Hawksworth: That’s exciting in its own way.

Vitaly: That’s the story of my life. I’m always late.

Scott: Members, coming up next, I believe, Thursday, the 13th, we have my ol’ pal, Zach Leatherman, talking about what he talks about best, which is fonts. So, he’s talking about the Five Whys of Font Implementations. And then, I’m also very interested in the one we have coming up on the 19th, which is editing video with JavaScript and CSS, with Eva Ferreira. So, stay tuned for both of those.

Scott: So, that is again, next Thursday, with Zach Leatherman, and then on the 19th, with Eva, who will be talking about editing video in JavaScript and CSS. So, on that note, Phil, I can’t see you anymore, are you still there?

Phil Hawksworth: I’m here!

Scott: On that note, thank you very much everyone! Also, is anybody in the, kind of, close to Toronto area? Or anybody that’s ever wanted to visit Toronto? We have a conference coming up at the end of June, and there’s still a few tickets left. So, maybe we’ll see some of you there.

Vitaly: Thank you so much, everyone else!

Vitaly: Oh, by the way, just one more thing! Maybe Phil mentioned it, but we also have the JAMstack Conference in London, in July. So, that’s something to watch out for, as well. But I’m signing off and going to get my salad! Not sure what you—

Scott: Okay, goodbye, everybody!

Vitaly: All right, bye-bye, everyone.

That’s A Wrap!

We kindly thank Smashing Members from the very bottom of our hearts for their continuous and kind support — and we can’t wait to host more webinars in the future.

Also, Phil will be MCing at SmashingConf Toronto 2019 next week and at JAMstack_conf — we’d love to see you there as well!

Please do let us know if you find this series of interviews useful, and whom you’d love us to interview, or what topics you’d like us to cover and we’ll get right to it.

Smashing Editorial
(ra, il)

Source: Smashing Magazine, JAMstack Fundamentals: What, What And How

Optimizing Google Fonts Performance

dreamt up by webguru in Uncategorized | Comments Off on Optimizing Google Fonts Performance

Optimizing Google Fonts Performance

Optimizing Google Fonts Performance

Danny Cooper



It’s fair to say Google Fonts are popular. As of writing, they have been viewed over 29 trillion times across the web and it’s easy to understand why — the collection gives you access to over 900 beautiful fonts you can use on your website for free. Without Google Fonts you would be limited to the handful of “system fonts” installed on your user’s device.

System fonts or ‘Web Safe Fonts’ are the fonts most commonly pre-installed across operating systems. For example, Arial and Georgia are packaged with Windows, macOS and Linux distributions.

Like all good things, Google Fonts do come with a cost. Each font carries a weight that the web browser needs to download before it can be displayed. With the correct setup, the additional load time isn’t noticeable. However, get it wrong and your users could be waiting up to a few seconds before any text is displayed.

Google Fonts Are Already Optimized

The Google Fonts API does more than just provide the font files to the browser; it also performs a smart check to see how it can deliver the files in the most optimized format.

Let’s look at Roboto: GitHub tells us that the regular variant weighs 168kb.

Roboto Regular has a file size of 168kb

168kb for a single font variant. (Large preview)

However, if I request the same font variant from the API, I’m provided with this file, which is only 11kb. How can that be?

When the browser makes a request to the API, Google first checks which file types the browser supports. I’m using the latest version of Chrome, which like most browsers supports WOFF2, so the font is served to me in that highly compressed format.

If I change my user-agent to Internet Explorer 11, I’m served the font in the WOFF format instead.

Finally, if I change my user agent to IE8 then I get the font in the EOT (Embedded OpenType) format.

Google Fonts maintains 30+ optimized variants for each font and automatically detects and delivers the optimal variant for each platform and browser.

— Ilya Grigorik, Web Font Optimization

This is a great feature of Google Fonts: by checking the user-agent, they are able to serve the most performant formats to browsers that support them, while still displaying the fonts consistently on older browsers.

Browser Caching

Another built-in optimization of Google Fonts is browser caching.

Due to the ubiquitous nature of Google Fonts, the browser doesn’t always need to download the full font files. SmashingMagazine, for example, uses a font called ‘Mija’. If this is the first time your browser has seen that font, it will need to download it completely before the text is displayed, but the next time you visit a website using that font, the browser will use the cached version.

As the Google Fonts API becomes more widely used, it is likely visitors to your site or page will already have any Google fonts used in your design in their browser cache.

— FAQ, Google Fonts

The Google Fonts browser cache is set to expire after one year unless the cache is cleared sooner.

Note: Mija isn’t a Google Font, but the principles of caching aren’t vendor-specific.

Further Optimization Is Possible

While Google invests great effort in optimizing the delivery of the font files, there are still optimizations you can make in your implementation to reduce the impact on page load times.

1. Limit Font Families

The easiest optimization is to simply use fewer font families. Each font can add up to 400kb to your page weight; multiply that by a few different font families, and suddenly your fonts weigh more than your entire page.

I recommend using no more than two fonts: one for headings and another for content is usually sufficient. With the right use of font-size, weight, and color you can achieve a great look with even one font.

Example showing three weights of a single font family (Source Sans Pro)

Three weights of a single font family (Source Sans Pro). (Large preview)

2. Exclude Variants

Due to the high-quality standard of Google Fonts, many of the font families contain the full spectrum of available font-weights:

  • Thin (100)
  • Thin Italic (100i)
  • Light (300)
  • Light Italic (300i)
  • Regular (400)
  • Regular Italic (400i)
  • Medium (500)
  • Medium Italic (500i)
  • Bold (700)
  • Bold Italic (700i)
  • Black (900)
  • Black Italic (900i)

That’s great for advanced use-cases which might require all 12 variants, but for a regular website, it means downloading all 12 variants when you might only need 3 or 4.

For example, the Roboto font family weighs ~144kb. If however you only use the Regular, Regular Italic and Bold variants, that number comes down to ~36kb. A 75% saving!

The default code for implementing Google Fonts looks like this:

<link href="https://fonts.googleapis.com/css?family=Roboto" rel="stylesheet">

If you do that, it will load only the ‘regular 400’ variant, which means all light, bold and italic text will not be displayed correctly.

To instead load all the font variants, we need to specify the weights in the URL like this:

<link href="https://fonts.googleapis.com/css?family=Roboto:100,100i,300,300i,400,400i,500,500i,700,700i,900,900i" rel="stylesheet">

It’s rare that a website will use all variants of a font from Thin (100) to Black (900), the optimal strategy is to specify just the weights you plan to use:

<link href="https://fonts.googleapis.com/css?family=Roboto:400,400i,600" rel="stylesheet">

This is especially important when using multiple font families. For example, if you are using Lato for headings, it makes sense to only request the bold variant (and possibly bold italic):

<link href="https://fonts.googleapis.com/css?family=Lato:700,700i" rel="stylesheet">

3. Combine Requests

The code snippet we worked with above makes a call to Google’s servers (fonts.googleapis.com), that’s called an HTTP request. Generally speaking, the more HTTP requests your web page needs to make, the longer it will take to load.

If you wanted to load two fonts, you might do something like this:

<link href="https://fonts.googleapis.com/css?family=Open+Sans:400,400i,600" rel="stylesheet">
<link href="https://fonts.googleapis.com/css?family=Roboto" rel="stylesheet">

That would work, but it would result in the browser making two requests. We can optimize that by combining them into a single request like this:

<link href="https://fonts.googleapis.com/css?family=Roboto|Open+Sans:400,400i,600" rel="stylesheet">

There is no limit to how many fonts and variants a single request can hold.

4. Resource Hints

Resource hints are a feature supported by modern browsers which can boost website performance. We are going to take a look at two types of resource hint: ‘DNS Prefetching’ and ‘Preconnect’.

Note: If a browser doesn’t support a modern feature, it will simply ignore it. So your web page will still load normally.

DNS Prefetching

DNS prefetching allows the browser to start the connection to Google’s Fonts API (fonts.googleapis.com) as soon as the page begins to load. This means that by the time the browser is ready to make a request, some of the work is already done.

To implement DNS prefetching for Google Fonts, you simply add this one-liner to your web pages <head>:

<link rel="dns-prefetch" href="//fonts.googleapis.com">
Preconnect

If you look at the Google Fonts embed code it appears to be a single HTTP request:

<link href="https://fonts.googleapis.com/css?family=Roboto:400,400i,700" rel="stylesheet">

However, if we visit that URL we can see there are three more requests to a different URL, https://fonts.gstatic.com. One additional request for each font variant.

Source code of a Google Fonts Request

(View source) (Large preview)

The problem with these additional requests is that the browser won’t begin the processes to make them until the first request to https://fonts.googleapis.com/css is complete. This is where Preconnect comes in.

Preconnect could be described as an enhanced version of prefetch. It is set on the specific URL the browser is going to load. Instead of just performing a DNS lookup, it also completes the TLS negotiation and TCP handshake too.

Just like DNS Prefetching, it can be implemented with one line of code:

<link rel="preconnect" href="https://fonts.gstatic.com/" crossorigin>

Just adding this line of code can reduce your page load time by 100ms. This is made possible by starting the connection alongside the initial request, rather than waiting for it to complete first.

5. Host Fonts Locally

Google Fonts are licensed under a ‘Libre’ or ‘free software’ license, which gives you the freedom to use, change and distribute the fonts without requesting permission. That means you don’t need to use Google’s hosting if you don’t want to — you can self-host the fonts!

All of the font files are available on GitHub. A zip file containing all of the fonts is also available (387MB).

Lastly, there is a helper service that enables you to choose which fonts you want to use, then it provides the files and CSS needed to do so.

There is a downside to hosting fonts locally. When you download the fonts, you are saving them as they are at that moment. If they are improved or updated, you won’t receive those changes. In comparison, when requesting fonts from the Google Fonts API, you are always served the most up-to-date version.

Google Fonts API Request showing a last modified date

Google Fonts API Request. (Large preview)

Note the lastModified parameter in the API. The fonts are regularly modified and improved.

6. Font Display

We know that it takes time for the browser to download Google Fonts, but what happens to the text before they are ready? For a long time, the browser would show blank space where the text should be, also known as the “FOIT” (Flash of Invisible Text).

Recommended Reading: “FOUT, FOIT, FOFT” by Chris Coyier

Showing nothing at all can be a jarring experience to the end user, a better experience would be to initially show a system font as a fallback and then “swap” the fonts once they are ready. This is possible using the CSS font-display property.

By adding font-display: swap; to the @font-face declaration, we tell the browser to show the fallback font until the Google Font is available.

@font-face {
  font-family: 'Roboto';
  src: local('Roboto Thin Italic'),
       url(https://fonts.gstatic.com/s/roboto/v19/KFOiCnqEu92Fr1Mu51QrEz0dL-vwnYh2eg.woff2) format('woff2');
  font-display: swap;
}

In 2019, Google announced they would add support for font-display: swap. You can begin implementing this right away by adding an extra parameter to the fonts URL:

https://fonts.googleapis.com/css?family=Roboto&display=swap

7. Use the Text Parameter

A little known feature of the Google Fonts API is the text parameter. This rarely-used parameter allows you to only load the characters you need.

For example, if you have a text-logo that needs to be a unique font, you could use the text parameter to only load the characters used in the logo.

It works like this:

https://fonts.googleapis.com/css?family=Roboto&text=CompanyName

Obviously, this technique is very specific and only has a few realistic applications. However, if you can use it, it can cut down the font weight by up to 90%.

Note: When using the text parameter, only the “normal” font-weight is loaded by default. To use another weight you must explicitly specify it in the URL.

https://fonts.googleapis.com/css?family=Roboto:700&text=CompanyName

Wrapping Up

With an estimated 53% of the top 1 million websites using Google Fonts, implementing these optimizations can have a huge impact.

How many of the above have you tried? Let me know in the comments section.

Smashing Editorial
(dm, yk, il)

Source: Smashing Magazine, Optimizing Google Fonts Performance

How To Create A PDF From Your Web Application

dreamt up by webguru in Uncategorized | Comments Off on How To Create A PDF From Your Web Application

How To Create A PDF From Your Web Application

How To Create A PDF From Your Web Application

Rachel Andrew



Many web applications have the requirement of giving the user the ability to download something in PDF format. In the case of applications (such as e-commerce stores), those PDFs have to be created using dynamic data, and be available immediately to the user.

In this article, I’ll explore ways in which we can generate a PDF directly from a web application on the fly. It isn’t a comprehensive list of tools, but instead I am aiming to demonstrate the different approaches. If you have a favorite tool or any experiences of your own to share, please add them to the comments below.

Starting With HTML And CSS

Our web application is likely to be already creating an HTML document using the information that will be added to our PDF. In the case of an invoice, the user might be able to view the information online, then click to download a PDF for their records. You might be creating packing slips; once again, the information is already held within the system. You want to format that in a nice way for download and printing. Therefore, a good place to start would be to consider if it is possible to use that HTML and CSS to generate a PDF version.

CSS does have a specification which deals with CSS for print, and this is the Paged Media module. I have an overview of this specification in my article “Designing For Print With CSS”, and CSS is used by many book publishers for all of their print output. Therefore, as CSS itself has specifications for printed materials, surely we should be able to use it?

The simplest way a user can generate a PDF is via their browser. By choosing to print to PDF rather than a printer, a PDF will be generated. Sadly, this PDF is usually not altogether satisfactory! To start with, it will have the headers and footers which are automatically added when you print something from a webpage. It will also be formatted according to your print stylesheet — assuming you have one.

The problem we run into here is the poor support of the fragmentation specification in browsers; this may mean that the content of your pages breaks in unusual ways. Support for fragmentation is patchy, as I discovered when I researched my article, “Breaking Boxes With CSS Fragmentation”. This means that you may be unable to prevent suboptimal breaking of content, with headers being left as the last item on the page, and so on.

In addition, we have no ability to control the content in the page margin boxes, e.g. adding a header of our choosing to each page or page numbering to show how many pages a complex invoice has. These things are part of the Paged Media spec, but have not been implemented in any browser.
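For context, this is roughly what the Paged Media specification lets you express: a hypothetical invoice stylesheet with a running header and page numbering placed in the margin boxes. No browser currently honors these rules when printing.

@page {
  size: A4;
  margin: 2cm;

  @top-center {
    /* running header repeated on every page */
    content: "Invoice";
  }

  @bottom-right {
    /* "Page 1 of 4" style numbering */
    content: "Page " counter(page) " of " counter(pages);
  }
}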

My article “A Guide To The State Of Print Stylesheets In 2018” is still accurate in terms of the type of support that browsers have for printing directly from the browser, using a print stylesheet.

Printing Using Browser Rendering Engines

There are ways to print to PDF using browser rendering engines, without going through the print menu in the browser, and ending up with headers and footers as if you had printed the document. The most popular options in response to my tweet were wkhtmltopdf, and printing using headless Chrome and Puppeteer.

wkhtmltopdf

A solution that was mentioned a number of times on Twitter is a command-line tool called wkhtmltopdf. This tool takes one or more HTML files, along with a stylesheet, and turns them into a PDF. It does this by using the WebKit rendering engine.


Essentially, therefore, this tool does the same thing as printing from the browser; however, you will not get the automatically added headers and footers. On the positive side, if you have a working print stylesheet for your content, then it should also output nicely to PDF using this tool, and a simple layout may well print very well.

Unfortunately, however, you will still run into the same problems as when printing directly from the web browser in terms of lack of support for the Paged Media specification and fragmentation properties, as you are still printing using a browser rendering engine. There are some flags that you can pass into wkhtmltopdf in order to add back some of the missing features that you would have by default using the Paged Media specification. However, this does require some extra work on top of writing good HTML and CSS.

Headless Chrome

Another interesting possibility is that of using Headless Chrome and Puppeteer to print to PDF.


However, once again, you are limited by browser support for Paged Media and fragmentation. There are some options which can be passed into the page.pdf() function. As with wkhtmltopdf, these add in some of the functionality that would be possible from CSS should there be browser support.
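As a minimal sketch of this approach (assuming Node.js with the puppeteer package installed, and an invoice.html of your own to convert), the script below loads the page and writes it out via page.pdf():

const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();

  // Load the HTML document that should become the PDF
  await page.goto('https://example.com/invoice.html', { waitUntil: 'networkidle0' });

  // Print to PDF; these options stand in for some of what Paged Media CSS would give us
  await page.pdf({
    path: 'invoice.pdf',
    format: 'A4',
    printBackground: true,
    margin: { top: '2cm', right: '2cm', bottom: '2cm', left: '2cm' }
  });

  await browser.close();
})();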

It may well be that one of these solutions will do all that you need, however, if you find that you are fighting something of a battle, it is probable that you are hitting the limits of what is possible with current browser rendering engines, and will need to look for a better solution.

JavaScript Polyfills For Paged Media

There are a few attempts to essentially reproduce the Paged Media specification in the browser using JavaScript — essentially creating a Paged Media Polyfill. This could give you Paged Media support when using Puppeteer. Take a look at paged.js and Vivliostyle.


Using A Print User Agent

If you want to stay with an HTML and CSS solution then you need to look to a User Agent (UA) designed for printing from HTML and CSS, which has an API for generating the PDF from your files. These User Agents implement the Paged Media specification and have far better support for the CSS Fragmentation properties; this will give you greater control over the output. Leading choices include Prince, Antenna House, and PDFReactor.

A print UA will format documents using CSS — just as a web browser does. As with browser support for CSS, you need to check the documentation of these UAs to find out what they support. For example, Prince (which I am most familiar with) supports Flexbox but not CSS Grid Layout at the time of writing. When sending your pages to the tool that you are using, typically this would be with a specific stylesheet for print. As with a regular print stylesheet, the CSS you use on your site will not all be appropriate for the PDF version.

Creating a stylesheet for these tools is very similar to creating a regular print stylesheet, making the kind of decisions in terms of what to display or hide, perhaps using a different font size or colors. You would then be able to take advantage of the features in the Paged Media specification, adding footnotes, page numbers, and so on.

In terms of using these tools from your web application, you would need to install them on your server (having bought a license to do so, of course). The main problem with these tools is that they are expensive. That said, given the ease with which you can then produce printed documents with them, they may well pay for themselves in developer time saved.

It is possible to use Prince via an API, on a pay-per-document basis, via a service called DocRaptor. This would certainly be a good place for many applications to start; if it later looked as if it would become more cost-effective to host your own, the development cost of switching would be minimal.

A free alternative, which is not quite as comprehensive as the above tools but may well achieve the results you need, is WeasyPrint. It doesn’t fully implement all of Paged Media, however, it implements more than a browser engine does. Definitely, one to try!

Other tools which claim to support conversion from HTML and CSS include PDFCrowd, which boldly claims to support HTML5, CSS3 and JavaScript. I couldn’t, however, find any detail on exactly what was supported, and if any of the Paged Media specification was. Also receiving a mention in the responses to my tweet was mPDF.

Moving Away From HTML And CSS

There are a number of other solutions, which move away from using HTML and CSS and require you to create specific output for the tool. A couple of JavaScript contenders were also suggested in the replies to my tweet.


Recommendations

Other than the JavaScript-based approaches, which would require you to create a completely different representation of your content for print, the beauty of many of these solutions is that they are interchangeable. If your solution is based on calling a command-line tool, and passing that tool your HTML, CSS, and possibly some JavaScript, it is fairly straightforward to switch between tools.

In the course of writing this article, I also discovered a Python wrapper which can run a number of different tools. (Note that you need to already have the tools themselves installed, however, this could be a good way to test out the various tools on a sample document.)

For support of Paged Media and fragmentation, Prince, Antenna House, and PDFReactor are going to come out top. As commercial products, they also come with support. If you have a budget, complex pages to print to PDF, and your limitation is developer time, then you would most likely find these to be the quickest route to have your PDF creation working well.

However, in many cases, the free tools will work well for you. If your requirements are very straightforward then wkhtmltopdf, or a basic headless Chrome and Puppeteer solution may do the trick. It certainly seemed to work for many of the people who replied to my original tweet.

If you find yourself struggling to get the output you want, however, be aware that it may be a limitation of browser printing, and not anything you are doing wrong. In the case that you would like more Paged Media support, but are not in a position to go for a commercial product, perhaps take a look at WeasyPrint.

I hope this is a useful roundup of the tools available for creating PDFs from your web application. If nothing else, it demonstrates that there are a wide variety of choices, if your initial choice isn’t working well.

Please add your own experiences and suggestions in the comments, this is one of those things that a lot of us end up dealing with, and personal experience shared can be incredibly helpful.


Smashing Editorial
(il)

Source: Smashing Magazine, How To Create A PDF From Your Web Application

Unleash The Power Of Path Animations With SVGator

dreamt up by webguru in Uncategorized | Comments Off on Unleash The Power Of Path Animations With SVGator

Unleash The Power Of Path Animations With SVGator

Unleash The Power Of Path Animations With SVGator

Mikołaj Dobrucki



(This is a sponsored article.) Last year, a comprehensive introduction to the basic use of SVGator was published here on Smashing Magazine. If you’d like to learn about the fundamentals of SVGator, setting up your first projects, and creating your first animations, we strongly recommend you read it before continuing with this article.

Today, we’ll take a second look to explore some of the new features that have been added to it over the last few months, including the brand new Path Animator.

Note: Path Animator is a premium feature of SVGator and it’s not available to trial users. During a seven-day trial, you can see how Path Animator works in the sample project you’ll find in the app, but you won’t be able to apply it to your own SVGs unless you’ve opted in to a paid plan. SVGator is a subscription-based service. Currently, you can choose between a monthly plan ($18USD/month) and a yearly plan ($144USD total, $12USD/month). For longer projects, we recommend you consider the yearly option.

Path Animator is just the first of the premium features that SVGator plans to release in the upcoming months. All the new features will be available to all paid users, no matter when they subscribed.

The Charm Of Path Animations

SVG path animations are by no means a new thing. In the last few years, this way of enriching vector graphics has been heavily used all across the web:

Animation by Codrops
Animation by Codrops (Original demo) (Large preview)

Path animations gained popularity mostly because of their relative simplicity: even though they might look impressive and complex at first glance, the underlying rule is in fact very simple.

How Do Path Animations Work?

You might think that SVG path animations require some extremely complicated drawing and transform functions. But it’s much simpler than it looks. To achieve effects similar to the example above, you don’t need to generate, draw, or animate the actual paths — you just animate their strokes. This brilliant concept allows you to create seemingly complex animations by animating a single SVG attribute: stroke-dashoffset.

Animating this one little property is responsible for the entire effect. Once you have a dashed line, you can play with the position of dashes and gaps. Combine it with the right settings and it will give you the desired effect of a self-drawing SVG path.
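In hand-written CSS, the trick could be sketched like this, where 300 is a made-up value standing in for the real length of your path (which you can read with getTotalLength() in JavaScript, or copy from SVGator, as we will see later):

.path {
  /* one dash and one gap, each as long as the entire path */
  stroke-dasharray: 300;
  /* shift the dash fully out of view so only the gap shows */
  stroke-dashoffset: 300;
  animation: draw 2s ease forwards;
}

@keyframes draw {
  to {
    /* slide the dash back in, “drawing” the path */
    stroke-dashoffset: 0;
  }
}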

If this still sounds rather mysterious or you’d just like to learn about how path animations are made in more detail, you will find some useful resources on this topic at the end of the article.

However simple path animations are compared with how they look, coding them isn’t always straightforward. As your files get more complicated, so does animating them. And this is where SVGator comes to the rescue.

Furthermore, sometimes you might prefer not to touch raw SVG files. Or maybe you’re not really fond of writing code altogether. Then SVGator has got you covered. With the new Path Animator, you can create even the most complex SVG path animations without touching a line of code. You can also combine coding with using SVGator.

To better understand the possibilities that Path Animator gives us, we will cover three separate examples presenting different use cases of path animations.

Example #1: Animated Text

In the first example, we will animate text, creating the impression of self-writing letters.

Final result of the first example
Final result of the first example (Large preview)

Often used for lettering, this cute effect can also be applied to other elements, such as drawings and illustrations. There’s a catch, though: the animated element must be styled with strokes rather than fills. Which means, for our text, that we can’t use any existing font.

Outlining fonts, no matter how thin, always results in closed shapes rather than open paths. There are no regular fonts based on lines and strokes.

Outlined fonts are not suitable for self-drawing effects with Path Animator

Outlined fonts are not suitable for self-drawing effects with Path Animator. (Large preview)

Path animations require strokes - these paths would work great with Path Animator

Path animations require strokes. These paths would work great with Path Animator. (Large preview)

Therefore, if we want to animate text using path animations we need to draw it ourselves (or find some ready-made vector letters suitable for this purpose). When drawing your letters, feel free to use some existing font or typography as a reference — don’t violate any copyright, though! Just keep in mind it’s not possible to use fonts out of the box.

Preparing The File

Rather than starting with an existing typeface, we’ll begin with a simple hand-drawn sketch:

A rough sketch for the animation

A rough sketch for the animation (pardon my calligraphy skills!) (Large preview)

Now it’s time to redraw the sketch in a design tool. I used Figma, but you can use any app that supports SVG exports, such as Sketch, Adobe XD, or Adobe Illustrator.

Usually, I start with the Pen tool and roughly follow the sketch imported as a layer underneath:

Once done, I remove the sketch from the background and refine the paths until I’m happy with the result. No matter what tools you use, nor technique, the most important thing is to prepare the drawing as lines and to use just strokes, no fills.

These paths can be successfully animated with Path Animator as they are created with strokes

These paths can be successfully animated with Path Animator as they are created with strokes. (Large preview)

In this example, we have four such paths. The first is the letter “H”; the second is the three middle letters “ell”; and “o” is the third. The fourth path is the line of the exclamation mark.

The dot of “!” is an exception — it’s the only layer we will style with a fill, rather than a stroke. It will be animated in a different way than the other layers, without using Path Animator.

Note that all the paths we’re going to animate with Path Animator are open, except for the “o,” which is an ellipse. Although animating closed paths (such as ellipses or polygons) with Path Animator is utterly fine and doable, it’s worth making it an open path as well, because this is the easiest way to control exactly where the animation starts. For this example, I added a tiny gap in the ellipse just by the end of the letter “l” as that’s where you’d usually start writing “o” in handwriting.

A small gap in the letter ‘o’ controls the starting point of the animation

A small gap in the letter ‘o’ controls the starting point of the animation. (Large preview)

Before importing our layers to SVGator, it’s best to clean up the layers’ structure and rename them in a descriptive way. This will help you quickly find your way around your file once working in SVGator.

If you’d like to learn more about preparing your shapes for path animations, I would recommend you check out this tutorial by SVGator.

It’s worth preparing your layers carefully and thinking ahead as much as possible. At the time of writing, in SVGator you can’t reimport the file to an already existing animation. While animating, if you discover an issue that requires some change to the original file, you will have to import it into SVGator again as a new project and start working on your animation from scratch.

Creating An Animation

Once you’re happy with the structure and naming of your layers, import them to SVGator. Then add the first path to the timeline and apply Path Animator to it by choosing it from the Animators list or by pressing Shift + T.

To achieve a self-drawing effect, our goal is to turn the path’s stroke into a dashed line. The length of a dash and a gap should be equal to the length of the entire path. This allows us to cover the entire path with a gap to make it disappear. Once hidden, change stroke-dashoffset to the point where the entire path is covered by a dash.

SVGator makes it very convenient for us by automatically providing the length of the path. All we need to do is to copy it with a click, and paste it into the two parameters that SVGator requires: Dashes and Offset. Pasting the value in Dashes turns the stroke into a dashed line. You can’t see it straightaway as the first dash of the line covers the whole path. Setting the Offset will change stroke-dashoffset so the gap then covers the path.

Once done, let’s create an animation by adding a new keyframe further along the timeline. Bring Offset back to zero and… ta-da! You’ve just created a self-drawing letter animation.

Creating a self-writing text animation in SVGator: Part 1

There’s one little issue with our animation, though. The letter is animated — but back-to-front. That is, the animation starts at the wrong end of the path. There are, at least, a few ways to fix it. First, rather than animating the offset from a positive value to zero, we can start with a negative offset and bring it to zero. Unfortunately, this may not work as expected in some browsers (for example, Safari does not accept negative stroke offsets). While we wait for this bug to be fixed, let’s choose a different approach.

Let’s change the Dashes value so the path starts with a gap followed by a dash (by default, dashed lines always start with a dash). Then reverse the values of the Offset animation. This will animate the line in the opposite direction.

Reversing the direction of self-writing animation

Now that we’re done with “H” we can move on to animating all the other paths in the same way. Eventually, we finish by animating the dot of the exclamation mark. As it’s a circle with a fill, not an outline, we won’t use Path Animator. Instead, we use a Scale Animator to make the dot pop in at the end of the animation.

Creating a self-writing text animation in SVGator: Part 2

Always remember to check the position of an element’s transform origin when playing with scale animations. In SVG, all elements have their transform origin in the top-left corner of the canvas by default. This often makes coding transform functions a very hard and tedious task. Fortunately, SVGator saves us from all this hassle by calculating all the transforms in relation to the object, rather than the canvas. By default, SVGator sets the transform origin of each element in its own top-left corner. You can change its position from the timeline, using a button next to the layer’s name.

Transform origin control in SVGator’s Timeline panel

Transform origin control in SVGator’s Timeline panel (Large preview)

Let’s add the final touch to the animation and adjust the timing functions. Timing functions define the speed over time of objects being animated, allowing us to manipulate their dynamics and make the animation look more natural.

In this case, we want to give the impression of the text being written by a single continuous movement of a hand. Therefore, I applied an Ease-in function to the first letter and an Ease-out function to the last letter, leaving the middle letters with a default Linear function. In SVGator, timing functions can be applied from the timeline, next to the Animator’s parameters:

Timing function control in SVGator’s Timeline panel

Timing function control in SVGator’s Timeline panel (Large preview)

After applying the same logic to the exclamation mark, our animation is done and ready to be exported!

Final result of the first example

Example #2: Animated Icon

Now let’s analyze a more UI-focused example. Here, we’re going to use SVGator to replicate a popular icon animation: turning a hamburger menu into a close button.

Final result of the second example
Final result of the second example (Large preview)

The goal of the animation is to smoothly transform the icon so the middle bar of the hamburger becomes a circle, and the surrounding bars cross each other creating a close icon.

Preparing The File

To better understand what we’re building and how to prepare a file for such an animation, it’s useful to start with a rough sketch representing the key states of the animation.

It’s helpful to plan your animation ahead and start with a sketch

It’s helpful to plan your animation ahead and start with a sketch. (Large preview)

Once we have a general idea of what our animation consists of, we can draw the shapes that will allow us to create it. Let’s start with the circle. As we’re going to use path animation, we need to create a path that covers the whole journey of the line, starting as a straight bar in the middle of the hamburger menu, and finishing as a circle around it.

Complete path of the middle bar animation turning into a circle

Complete path of the middle bar animation turning into a circle. (Large preview)

The other two bars of the menu icon have an easier task — we’re just going to rotate them and align to the centre of the circle. Once we combine all the shapes together we’re ready to export the file as SVG and import it to SVGator.

Our icon, ready to be animated in SVGator

Our icon, ready to be animated in SVGator. (Large preview)

Creating An Animation

Let’s start by adding the first shape to the timeline and applying Path Animator to it. For the initial state, we want only the horizontal line in the middle to be visible, while the rest of the path stays hidden. To achieve it, set the length of the dash to be equal to the length of the hamburger’s lines. This will make our straight middle line of the menu icon. To find the correct value, you can use the length of one of the other lines of the hamburger. You can copy it from the timeline or from the Properties panel in the right sidebar of the app.

Then set the length of the following gap to a value greater than the remaining length of the path so it becomes transparent.

Creating an icon animation in SVGator: Part 1

The initial state of our animation is now ready. What happens next is that we turn this line into a circle. To do that, two things need to happen simultaneously. First, we use Offset to move the line along the path. Second, we change the width of the dash to make the line longer and cover the entire circle.

Creating an icon animation in SVGator: Part 2

With the circle ready, let’s take care of the close icon. Just as before, we need to add two animations at the same time. First, we want the top line to lean down (45 degrees) and the bottom line to move up (-45 degrees) until they cross each other symmetrically. Second, we need to move the lines slightly to the right so they stay aligned with the circle.

As you might remember from the previous example, in SVGator, transform origins are located in the top-left corner by default. That’s very convenient to us as, in this case, that is exactly where we want them to be. All we need to do is to apply the correct rotation angles.

When it comes to aligning the lines with the circle, note that we don’t have to move them separately. Rather than adding Animators to both of the lines, we can add a group containing both of them to the timeline, and animate them together with a single Position Animator. That’s one of those moments when a nice, clean file structure pays off.

Creating an icon animation in SVGator: Part 3

Next thing to do is add a reverse animation that turns the close button back into a hamburger menu. To achieve that, we can basically follow the previous steps in reverse order. To speed things up a bit, copy and paste the existing keyframes on the timeline — that’s yet another improvement SVGator introduced in the past few months.

Reversing icon animation: back to the hamburger menu.

Once done, don’t forget to adjust the timing functions. Here, I’ve decided to go with an Ease-in-out effect on all elements. Our icon is ready for action.

Final result of the second example

Implementation

Even though implementing microinteractions goes far beyond the scope of this article, let me take a moment to briefly describe how such animation can be brought to life in a real project.

Illustrations and decorative animation are usually more straightforward. Quite often, you can use SVG files generated by SVGator out of the box. We can’t say that about our icon, though. We want the first part of the animation to be triggered when users click the button to open the menu drawer, and the second part of the animation to play once they click it for the second time to close the menu.

To do that, we need to slice our animation into a few separate pieces. We won’t discuss here the technical details of implementing such animation, as it depends very much on the environment and tech stack you’re working with; but let’s at least inspect the generated SVG file to extract the crucial animation states.

We’ll start by hiding the background and adjusting the size of the canvas to match the dimensions of the icon. In SVGator, we can do this at any time, and there are no restrictions to the size of our canvas. We can also edit the styles of the icon, such as color and width of the stroke, and test what your graphic will look like on a dark background using a switch in the top-right corner.

Preparing icon animation for development

When we’re ready, we can export the icon to SVG and open it in a text editor.

Elements you see in the body of the document are the components of your graphic. You should also notice that the first line of code is exceptionally long. Straight after the opening <svg> tag, there’s a <style> element with plenty of minified CSS inside. That’s where all the animation happens.

<svg viewBox="0 0 600 450" fill="none" xmlns="http://www.w3.org/2000/svg" id="el_vNqlglrYK"><style>@-webkit-keyframes kf_el_VqluQuq4la_an_DAlSHvvzUV… </style> <!-- a very long line of code that contains all the animations -->
<g id="el_SZQ_No_bd6">
<g id="el_BVAiy-eRZ3_an_biAmTPyDq" data-animator-group="true" data-animator-type="0"><g id="el_BVAiy-eRZ3">
<g id="el_Cnv4q4_Zb-_an_6WWQiIK_0" data-animator-group="true" data-animator-type="1"><path id="el_Cnv4q4_Zb-" d="M244 263H356" stroke-linecap="round"/></g>
<g id="el_aGYDsRE4sf_an_xRd24ELq3" data-animator-group="true" data-animator-type="1"><path id="el_aGYDsRE4sf" d="M244 187H356" stroke-linecap="round"/></g>
</g></g>
<path id="el_VqluQuq4la" d="M244 225H355.5C369 225 387.5 216.4 387.5 192C387.5 161.5 352 137 300 137C251.399 137 212 176.399 212 225C212 273.601 251.399 313 300 313C348.601 313 388 273.601 388 225C388 176.399 349.601 137 301 137" stroke-linecap="round"/>
</g>
</svg>

It’s really nice of SVGator to minify the code for us. However, we’ll have to undo it. Once the CSS code is written out in full (you can do this in your browser’s development tools, or in one of many online code formatters), you’ll see that it’s a long list of @keyframes followed by a list of id rules using the @keyframes in their animation properties.

The code may look unreadable (even when nicely formatted), but it’s actually very repetitive. Once you understand the underlying rule, following it is no longer that hard. First, we’ve got the @keyframes. Each animated element has its own @keyframes @-rule. They’re sorted in the same order as the elements in SVGator. Therefore, in our case, the first @-rule applies to the middle bar of the hamburger icon, the second one to the top bar, and so on. The keyframes inside also match the order of keyframes created in SVGator:

@keyframes kf_el_VqluQuq4la_an_DAlSHvvzUV{ /* middle bar animation */
    0%{
        stroke-dasharray: 112, 2000; /* initial state */
    }
    25%{
        stroke-dasharray: 112, 2000;
    }
    50%{
        stroke-dasharray: 600, 2000; /* turns into a circle */
    }
    75%{
        stroke-dasharray: 600, 2000; /* stays as a full circle */
    }
    100%{
        stroke-dasharray: 112, 2000; /* back to the initial state */
    }
}

All you need to do now is use these values from the keyframes to code your interaction. It’s still a lot of work up ahead, but thanks to SVGator the crucial part is already done.

What happens next is another story. However, if you’re curious to see an example of how this animation could work in practice, here’s a little CodePen for you:

See the Pen [Hamburger icon path animation](https://codepen.io/smashingmag/pen/ewNdJo) by Mikołaj.


The example is built with React and uses states to switch CSS classes and trigger transitions between the respective CSS values. Therefore, there’s no need for animation properties and @keyframes @-rules.

You can use the set of CSS custom properties listed at the top of the SCSS code to control the styling of the icon as well as the duration of the transitions.

Example #3: Animated Illustration

For the third and final example of this article, we’re going to create an animated illustration of an atom with orbiting particles.

Final result of the third example
Final result of the third example (Large preview)

Dashed Lines And Dotted Lines

In the two previous examples, we’ve taken advantage of dashed SVG paths. Dashed lines are cool, but did you know that SVG also supports dotted lines? A dotted line in SVG is nothing more than a dashed line with round caps in which the length of the dashes is equal to zero.

If we can have a path with lots of dots, who said we can’t have a path with a single dot? Animate the stroke’s offset and you’ve got an animation of a circle following any path you want. In this example, the path will be an ellipse, and a circle will represent an orbiting particle.
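Sketched as plain CSS (with 300 again standing in for the actual path length of the orbit), the idea looks something like this:

.particle {
  /* round caps turn a zero-length dash into a dot */
  stroke-linecap: round;
  /* one dot followed by a gap that covers the whole path */
  stroke-dasharray: 0 300;
  animation: orbit 4s linear infinite;
}

@keyframes orbit {
  to {
    /* move the dot once around the ellipse */
    stroke-dashoffset: 300;
  }
}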

Preparing The File

As no SVG element can have two strokes at the same time, for each of the particles we need two ellipses. The first of them will be an orbit, the second will be for the particle. Multiply it by three, combine with another circle in the middle for the nucleus and here it is: a simple atom illustration, ready to be animated.

Our illustration, ready to be imported to SVGator.

Our illustration, ready to be imported to SVGator. (Large preview)

Note: At the time of writing, creating dotted lines in Figma is a hard task. Not only can’t you set a dash’s length to zero, but neither can you create a gap between the dashes long enough to cover the entire path. And when it comes to export, all your settings are gone anyway. Nonetheless, if you’re working with Figma, don’t get discouraged. We’ll fix all of these issues easily in SVGator. And if you’re working in Sketch, Illustrator, or similar, you shouldn’t experience these problems at all.

Creating An Animation

Once you have imported the SVG file into SVGator, we’ll start by fixing the dotted lines. As mentioned above, to achieve a perfect circular dot, we need a dash length set to zero. We also set the length of the gap equal to the length of the path (copied from above). This will make our dot the only one visible.

Creating an illustration animation in SVGator: Part 1

With all three particles ready, we can add new keyframes and animate the offsets by one full length of the path. Finally, we play a bit with the Offset values to make the dots’ positions feel a bit more random.

Creating an illustration animation in SVGator: Part 2.

Remember that if you find your animation too fast or too slow you can always change its duration in the settings. Right now, SVGator supports animations up to 30 seconds long.

As a final touch, I’ve added a bit of a bounce to the whole graphic.

Creating an illustration animation in SVGator: Part 3

Now the animation is ready and can be used, perhaps as a loader graphic.

Final result of the third example

A Quick Word On Accessibility

As you can see, there’s hardly a limit to what can be achieved with SVG. And path animations are a very important part of its tool kit. But as a wise man once said, with great power comes great responsibility. Please refrain from overusing them. Animation can add life to your product and delight users, but too many animations can ruin the whole experience as well.

Also, consider allowing users to disable animations. People suffering from motion sickness and other related conditions will find such an option very helpful.

Conclusion

That’s it for today. I hope you enjoyed this journey through the possibilities of path animations. To try them out yourself, just visit SVGator’s website where you can also learn about its other features and pricing. If you have any remarks or questions, please don’t hesitate to add them in the comments. And stay tuned for the next updates about SVGator — there are lots of other amazing new features already on the way!

Further Reading

Useful Resources

Smashing Editorial
(og, yk, il)

Source: Smashing Magazine, Unleash The Power Of Path Animations With SVGator

Building A Component Library Using Figma

dreamt up by webguru in Uncategorized | Comments Off on Building A Component Library Using Figma

Building A Component Library Using Figma

Building A Component Library Using Figma

Emiliano Cicero



I’ve been working on the creation and maintenance of the main library of our design system, Lexicon. We used the Sketch app for the first year and then we moved to Figma where the library management was different in certain aspects, making the change quite challenging for us.

To be honest, as with any library construction, it requires time, effort, and planning, but it is worth it because it will help you provide detailed components for your team. It will also increase the overall design consistency and make maintenance easier in the long run. I hope the tips that I’ll provide in this article will make the process smoother for you as well.

This article will outline the steps needed for building a component library with Figma, by using styles and a master component. (A master component will allow you to apply multiple changes all at once.) I’ll also cover in detail the components’ organization and will give you a possible solution if you have a large number of icons in the library.

Note: To make it easier to use, update and maintain, we found that it is best to use a separate Figma file for the library and then publish it as a team ‘library’ instead of publishing the components individually.

Getting Started

This guide was created from a designer’s perspective, and if you have at least some basic knowledge of Figma (or Sketch), it should help you get started with creating, organizing and maintaining a component library for your design team.

If you are new to Figma, check the following tutorials before proceeding with the article:

Requirements

Before starting, there are some requirements that we have to cover to define the styles for the library.

Typography Scale

The first step is to define the typography scale; it helps you focus on how text size and line height grow in your system, allowing you to define the visual hierarchy of your texts.

a scale of text in different sizes, from small to big

Typography scales are useful to improve the hierarchy of the elements, as managing the sizes and weights of the fonts can really guide the user through the content. (Large preview)

The type of scale depends on what you’re designing. It’s common to use a bigger ratio for website designs and a smaller ratio when designing digital products.

The reason for this is behind the design’s goal — a website is usually designed to communicate and convert so it gives you one or two direct actions. It’s easier in that context to have 36px for a main title, 24px for a secondary title, and 16px for a description text.

Related resource: “8-Point Grid: Typography On The Web” by Elliot Dahl.

On the other hand, digital products or services are designed to provide a solution to a specific problem, usually with multiple actions and possible flows. It means more information, more content and more components, all in the same space.

For this case, I personally find it rare to use more than 24px for texts. It’s more common to use small sizes for components — usually from 12 to 18 pixels depending on the text’s importance.

If you’re designing a digital product, it is useful to talk to the developers first. It’s easier to maintain a typography scale based on EM/REM rather than on actual pixels, and the creation of a rule to convert pixels into EM/REM multiples is always recommended.
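As an illustration only, using the default browser font size of 16px and the example sizes mentioned above, such a conversion rule could look like this:

:root {
  font-size: 100%;         /* 1rem = 16px in most browsers */
}

h1 { font-size: 2.25rem; } /* 36px ÷ 16 = 2.25rem */
h2 { font-size: 1.5rem; }  /* 24px ÷ 16 = 1.5rem */
p  { font-size: 1rem; }    /* 16px */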

Related resource: “Defining A Modular Type Scale For Web UI” by Kelly Dern.

Color Scheme

Second, we need to define the color scheme. I think it’s best if you divide this task into two parts.

  1. First, you need to define the main colors of the system. I recommend keeping it simple and using a maximum of four or five colors (including validation colors) because the more colors you include here, the more stuff you’ll have to maintain in the future.
  2. Next, generate more color values using the Sass functions such as “Lighten” and “Darken” — this works really well for user interfaces. The main benefit of this technique is to use the same hue for the different variants and obtain a mathematical rule that can be automated in the code. You can’t do it directly with Figma, but any Sass color generator will work just fine — for example, SassMe by Jim Nielsen. I like to step the functions in 1% increments to have a wider selection of colors to choose from (see the sketch below).
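A minimal Sass sketch of that idea (the hex value and the percentages below are placeholders, not Lexicon’s actual values):

$primary: #0b5fff;

// lighter variants, stepped by a fixed percentage
$primary-l1: lighten($primary, 8%);
$primary-l2: lighten($primary, 16%);

// darker variants, following the same rule
$primary-d1: darken($primary, 8%);
$primary-d2: darken($primary, 16%);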

2 different sets of colors with different tones

Once you have your main colors (in our case, blue and grey), you can generate gradients using lighten and darken functions. (Large preview)

Tip: In order to be able to apply future changes without having to rename the variables, avoid using the color itself as part of the variable name. E.g., instead of $blue, use $primary.

Recommended reading: “What Do You Name Color Variables?” by Chris Coyier

Figma Styles

Once we have the typography scale and the color scheme set, we can use them to define the Library styles.

This is the first actual step into the library creation. This feature lets you use a single set of properties in multiple elements.

2 shapes showing a color palette and a text to represent the possible styles

Styles are the way to control all the basic details in your library. (Large preview)

Concrete Example

Let’s say you define your brand color as a style: it’s a soft blue, and you originally apply it to 500 different elements. If it is later decided that you need to change it to a darker blue with more contrast, thanks to the styles you can update all 500 styled elements at once, so you won’t have to do it manually, element by element.

We can define styles for the following: texts, colors, effects, and grids.

If you have variations of the same style, to make it easier to find them later, you can name the single styles and arrange them inside the panel as groups. To do so, just use this formula:

Group Name/Style Name

I’ve included a suggestion of how to name texts and colors styles below.

Text Styles

Properties that you can define within a text style:

  • Font size
  • Font weight
  • Line-height
  • Letter spacing

Tip: Figma drastically reduces the number of styles that we need to define in the library, as alignments and colors are independent of the style. You can combine a text style with a color style in the same text element.

4 shapes with text inside used as examples of different text styles

You can apply all the typography scale we’ve seen before as text styles. (Large preview)

Text Styles Naming

I recommend using a naming rule such as “Size/Weight”
(eg: 16/Regular, 16/SemiBold, 16/Bold).

Figma only allows one level of indentation; if you need to include the font, you can always add a prefix before the size:
FontFamily Size/Weight or FF Size/Weight
(eg: SourceSansPro 16/Regular or SSP 16/Regular).

Color Styles

The color style uses its hex value (#FFF) and the opacity as properties.

Tip: Figma allows you to set a color style for the fill and a different one for the border within the same element, making them independent of each other.

4 shapes with colors inside, used as examples of different color styles

You can apply color styles to fills, borders, backgrounds, and texts. (Large preview)

Color Styles Naming

For a better organization I recommend using this rule “Color/Variant”. We named our color styles using “Primary/Default” for the starter color, “Primary/L1”, “Primary/L2” for lighten variants, and “Primary/D1”, “Primary/D2” for darken variants.

Effects

When designing an interface you might also need to create elements that use some effects such as drop shadows (drag-and-drop could be an example of a pattern that uses drop shadow effects). To have control over these graphic details, you can include effect styles such as shadows or layer blurs in the library, and also divide them into groups if necessary.

2 shapes similar to paper, one above the other one to show the shadow effect

Define shadows and blurs to manage special interaction effects such as drag-n-drop. (Large preview)

Grids

To provide something very useful for your team, include the grid styles. You can define the 8px grid, the 12-column grid, and flexible grids so your team won’t need to recreate them.

12 columns to represent the grid styles

There’s no need to memorize the grid sizes anymore. (Large preview)

Tip: Taking advantage of this feature, you can provide all the different breakpoints as ‘grid styles’.

Master Component

Figma lets you generate multiple instances of the same component and update them through a single master component. It’s easier than you might think: you can start with some small elements and then use them to evolve your library.

a single group of three shapes that shows how you can get seven different results by hiding some of the shapes

One master component to rule them all! (Large preview)

To explain this workflow better, I will use one of the basic components all the libraries have: the buttons.

Buttons!

Every system has different types of buttons to represent the importance of the actions. You can start having both primary and secondary buttons with only texts and one size, but the reality is that you’ll probably end up having to maintain something like this:

  • 2 color types (Primary | Secondary)
  • 2 sizes of buttons (Regular | Small)
  • 4 content types (Text Only | Icon Only | Text + Icon right | Icon Left + Text)
  • 5 states (Default | Hover | Active | Disabled | Focus)

This would give us up to 88 different components to maintain only with the set of buttons mentioned above!

a screenshot with a total of 88 different button components

Thanks to how Figma is built, you can easily manage a lot of button instances all at once. (Large preview)

Let’s Start Step By Step

The first step is to include all the variations together in the same place. For the buttons we’re going to use:

  • A single shape for the background of the button so that we can then place the color styles for the fill and the border;
  • The single text that will have both text and color styles;
  • Three icon components (positioned to the right, center and left) filled in with the color style (you will be able to easily swap the icons).

a group of divided elements: a rectangle shape, a button text and 3 icons

A shape, a text, and an icon walk into a Figma bar… (Large preview)

The second step is to create the master component (use the shortcut Cmd + Alt + K on Mac, or Ctrl + Alt + K on Windows) with all of the variations as instances. I suggest using a different and neutral style for the elements inside the master component and use the real styles on the sub-components, this trick will help the team use only sub-components.

You can see the visual difference between a master component and a sub-component in the next step:

A group of elements centered in the same space, one over the other one

The more elements, the more instances you can control. (Large preview)

In the third step, you need to duplicate the master component to generate an instance. You can now use that instance to create a sub-component, and from now on every change you make to the master component will also change the sub-component you’ve created.

You can now start applying the different styles we’ve seen before to the elements inside the sub-component and, of course, you can hide the elements you don’t need in that sub-component.

An example showing how 8 different buttons can be generated from 1 single component

Thanks to the color styles you can generate different components using the same shape. In this example, primary and secondary styles are generated from the same master component. (Large preview)

Text Alignment

As I’ve shown you in the styles, the alignments are independent of the style. So if you want to change the text alignment, just select the text element (click it while holding Cmd/Ctrl to deep-select it) and change it. Left, center or right: it will all work, and you can define different sub-components as I did with the buttons.

Tip: If you delete an element inside an instance, Figma hides it instead of actually deleting it. This helps you work faster, as you don’t have to find the exact element layer in order to hide it.

Component Organization

If you’re coming from Sketch, you could be having trouble with the organization of the components in Figma as there are a few key differences between these two tools. This is a brief guide to help you organize the components well so that the instance menu doesn’t negatively affect your team’s effectiveness.

showing the instance menu open with more unordered sub-menus

As you can see here, our library had so many sub-menus that as a result the navigation was going off the screen on MacBooks, that was a big problem for our library. We were able to find a workaround for this issue. (Large preview)

showing the improvements on the instance menu open with ordered sub-menus

This was the result after improving the library order following the rules for pages and frames, now it’s way more usable and organized for our teams. (Large preview)

We’ve all been there, the solution is easier than you think!

Here’s what I have learned about how to organize the components.

Figma Naming

While in Sketch all the organization depends only on the single component name, in Figma it depends on the Page name, the Frame name, and the single Component name — exactly in that order.

In order to provide a well-organized library, you need to think of it as a visual organization. As long as you respect the order, you can customize the naming to fit your needs.

Here’s how I’ve divided it:

  • File Name = Library Name (e.g. Lexicon);
  • Page Name = Component Group (e.g. Cards);
  • Frame Name = Component Type (e.g. Image Card, User Card, Folder Card, etc);
  • Component Name = Component State (e.g. Default, Hover, Active, Selected, etc).

Showing the main page named ‘Cards’, the frame named ‘Image Card’ and the layer named ‘Card Hover’

This structure is the equivalent to the Sketch naming of ‘Cards/Image Card/Card Hover’. (Large preview)

Adding Indentation Levels

When creating the Lexicon library, I found that I actually needed more than three levels of indentation for some of the components, such as the buttons that we saw before.

For these cases, you can extend the naming using the same method as Sketch for nested symbols (using the slashes in the component name, e.g. “Component/Sub-Component”), under the condition that you do it only after the third level of indentation, respecting the structural order of the first three levels as explained in the previous point.

This is how I organized our buttons:

  • Page name = Component Group (e.g. Buttons);
  • Frame name = Component Size (e.g. Regular or Small);
  • Component name = Style/Type/State (e.g. Primary/Text/Hover).

Showing the main page named 'Buttons', the frame named 'Buttons Regular' and the layer named 'Primary/Text/Button Hover' as example of the possible structures.

This structure is the equivalent to the Sketch naming of ‘Buttons/Buttons Regular/Primary/Text/Button Hover’. (Large preview)

Tip: You can include the component name (or a prefix of the name) in the last level, this will help your team to better identify the layers when they import the components from the library.

Icons Organization

Organizing the icons in Figma can be challenging when including a large number of icons.

As opposed to Sketch which uses a scroll functionality, Figma uses the sub-menus to divide the components. The problem is that if you have a large number of icons grouped in sub-menus, at some point they might go off screen (my experience with Figma on a MacBook Pro).

Showing the instance menu for the icons with a single scrollable sub-menu.

An example of how the components are organized inside a single scrollable sub-menu. (Large preview)

Showing the instance menu for the icons with more than 10 sub-menus and cover all the screen.

As you can see, using a Macbook Pro the result was the menus going outside the screen. (Large preview)

Here are two possible solutions:

  • Solution 1
    Create a page named “Icons” and then a frame for each letter of the alphabet, then place each icon in the frame based on the icon’s name. For example, if you have an icon named “Plus”, then it will go in the “P” frame.
  • Solution 2
    Create a page named “Icons” and then divide by frames based on the icon categories. For example, if you have icons that represent a boat, a car, and a motorcycle, you can place them inside a frame named “vehicles”.

The instance menu is open, showing the alphabetical order of the icons in Figma.

I, personally, applied solution 1. As you can see in this example, we had a huge number of icons so this was the better fit. (Large preview)

Conclusion

Now that you know what’s exactly behind a team’s library construction in Figma, you can start building one yourself! Figma has a free subscription plan that will help you to get started and experiment with this methodology in a single file (however, if you want to share a team library, you will need to subscribe to the “Professional” option).

Try it, create and organize some advanced components, and then present the library to your team members so you could amaze them — or possibly convince them to add Figma to their toolset.

Finally, let me mention that here in Liferay, we love open-source projects and so we’re sharing a copy of our Lexicon library along with some other resources. You can use the Lexicon library components and the other resources for free, and your feedback is always welcome (including as Figma comments, if you prefer).

The Lexicon logo, it’s similar to a hexagon and a fingerprint together.

Lexicon is the design language of Liferay, used to provide a Design System and a Figma Library for the different product teams. (Large preview)

If you have questions or need help with your first component library in Figma, ask me in the comments below, or drop me a line on Twitter.


Smashing Editorial
(mb, yk, il)

Source: Smashing Magazine, Building A Component Library Using Figma

Monthly Web Development Update 6/2019: Rethinking Privacy And User Engagement

dreamt up by webguru in Uncategorized | Comments Off on Monthly Web Development Update 6/2019: Rethinking Privacy And User Engagement

Monthly Web Development Update 6/2019: Rethinking Privacy And User Engagement

Monthly Web Development Update 6/2019: Rethinking Privacy And User Engagement

Anselm Hannemann



Last week I read about the web turning into a dark forest. This made me think, and I’m convinced that there’s hope in the dark forest. Let’s stay positive about how we can contribute to making the web a better place and stick to the principle that each one of us is able to make an impact with small actions. Whether it’s you adding Webmentions, removing tracking scripts from a website, recycling plastic, picking up trash from the street to throw it into a bin, or cycling instead of driving to work for a week, we all can make things better for ourselves and the people around us. We just have to do it.

News

  • Safari went ahead by introducing their new Intelligent Tracking Protection and making it the new default. Now Firefox followed, enabling their Enhanced Tracking Protection by default, too.
  • Chrome 75 brings support for the Web Share API which is already implemented in Safari. Latency on canvas contexts has also been improved.
  • The Safari Technology Preview Release 84 introduced Safari 13 features: warnings for weak passwords, dark mode support for iOS, support for aborting Fetch requests, FIDO2-compliant USB security keys with the Web Authentication standard, support for “Sign In with Apple” (for Safari and WKWebView). The Visual Viewport API, ApplePay in WKWebView, screen sharing via WebRTC, and an API for loading ES6 modules are also supported from now on.
  • There’s an important update to Apple’s AppStore review guidelines that requires developers to offer “Sign In with Apple” in their apps in case they support third-party sign-in once the service is available to the public later this year.
  • Firefox 67 is out now with the Dark Mode CSS media query, WebRender, and side-by-side profiles that allow you to run multiple instances in parallel. Furthermore, enhanced privacy controls are built in against crypto miners and fingerprinting, as well as support for AV1 on Windows, Linux, and macOS for videos, String.prototype.matchAll(), and dynamic imports.

General

  • The web relies on so many open-source projects, and, yet, here’s what it looks like to live off an open-source budget. Most authors are below the poverty line, forced to live in cheaper countries or not able to make a living at all from their public service of providing reliable, open software for others who then use it commercially.
  • We all know that annoying client who ignores your knowledge and gets creative on their own. As a developer, Holger Bartel experienced it dozens of times; now he found himself in the same position, having ordered a fine drink and then messed it up.

UI/UX

  • With so many dark patterns built into the software and websites we use daily, Fabricio Teixeira and Caio Braga call for a tech diet for users.

Nutrition-facts-style labels for Facebook, Instagram, Twitter, and Netflix.

“Dark patterns try to manipulate users to engage further, deeper, or longer on a site or app.” The world needs a tech diet, and designers can help make it a reality.

CSS

  • The CSS feature for truncating multi-line text has been implemented in Firefox: setting -webkit-line-clamp: 3, for example, truncates the text at the end of the third line (see the sketch below).
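
A minimal sketch of how the property is used (the .teaser class name is a placeholder); line clamping only takes effect together with the legacy flexbox display mode and hidden overflow:

    /* Clamp a teaser paragraph to three lines. */
    .teaser {
      display: -webkit-box;
      -webkit-box-orient: vertical;
      -webkit-line-clamp: 3;
      overflow: hidden;
    }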

Privacy

  • Anil Dash tries to find an answer to the question of whether we can trust a company in 2019.
  • Kevin Litman-Navarro analyzed over 150 privacy policies and shares his findings in a visual story. Not only does it take about 15 minutes on average to read a privacy policy, but most of them also require college-level or even professional-level reading skills to understand.
  • Our view on privacy hasn’t changed much since the 18th century, but the circumstances are different today: Companies have a wild appetite to store more and more data about more people in a central place — data that was once exclusively accessible by state authorities. We should redefine what privacy, personal data, and consent are, as Maciej Cegłowski argues in “The new wilderness.”
  • The people at WebKit are very active when it comes to developing clever solutions to protect users without compromising too much on usability and keeping the interests of publishers and vendors in mind at the same time. Now they introduced “privacy preserving ad click attribution for the web,” a technique that limits the data which is sent to third parties while still providing useful attribution metrics to advertisers.

An overview of how hard privacy policies are to read and how long it takes to do so: most are written at a college or professional level, and only one is comprehensible at a middle-school level.

Most privacy policies on the web are harder to read than Stephen Hawking’s “A Brief History Of Time,” as Kevin Litman-Navarro found out by examining 150 privacy policies.

Accessibility

  • Brad Frost describes a great way to reduce motion on websites (from animated GIFs, for example) using the picture element and its media query support; see the sketch below.
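
The idea, roughly sketched with placeholder file names: a source element serves a still frame to users who have asked their operating system for reduced motion, while everyone else gets the animated GIF:

    <picture>
      <!-- Static frame for users with "prefers-reduced-motion: reduce" -->
      <source srcset="still.png" media="(prefers-reduced-motion: reduce)">
      <!-- Animated GIF for everyone else -->
      <img src="animated.gif" alt="Short description of the animation">
    </picture>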

Tooling

  • The IP Geolocation API is an open-source, real-time IP-to-geolocation JSON API with detailed country data, based on the MaxMind GeoLite2 database.
  • Pascal Landau wrote a step-by-step tutorial on how to build a Docker development setup for PHP projects, and yes, it contains everything you might need to apply it to your own projects.

Work & Life

  • Roman Imankulov from Doist shares insights into decision-making in a flat organization.
  • As a society, we’re overworked, own too many belongings, yet crave more, and companies only exist to grow indefinitely. This is how we kick-started climate change in the past century, and it’s how we got more people than ever into burnout, depression, and various other health issues, including work-related suicides. Philipp Frey has a bold theory that breaks with our current system: research by Nässén and Larsson suggests that a 1% decrease in working hours could lead to a 0.8% decrease in GHG emissions. Taking it further, the paper suggests that working 12 hours a week would allow us to easily achieve climate goals, provided we also change the economy so that it no longer focuses entirely on growth. An interesting study, as it explores new ways of working, living, and consuming.
  • Leo Babauta shares a method that helps you acknowledge when you’re tired. It’s hard to accept, but we are humans and not machines, so there are times when we feel tired and our batteries are low. The best way to recover is realizing that this is happening and focusing on it to regain some energy.
  • Many of us are trying to achieve some minutes or hours of “deep work” a day. Fadeke Adegbuyi’s “The Complete Guide to Deep Work” shares valuable tips to master it.

Going Beyond…

  • People who live a “zero waste” life are often seen as extreme, but that’s only one point of view. Here’s the other side, where one of those “extreme” people reminds us that it used to be normal to go to a farmers’ market to buy things that aren’t packed in plastic, to ride a bike, and to drink water from a public fountain. Instead, our consumerism has become quite extreme and needs to change if we want to survive and stay healthy.
  • Sweden wants to become climate neutral by 2045, and now they presented an interesting visualization of the plan. It’s designed to help policymakers identify and fill in gaps to ensure that the goal will be achieved. The visualization is open to the public, so anyone can hold the government accountable.
  • Everybody loves them, many have them: AirPods. However, they are an environmental disaster, as this article shows.
  • The North Face tricking Wikipedia shows advertising’s dark side.
  • The New York Times published a guide which helps us understand our impact on climate change based on the food we eat. This is not about going vegan but how changing eating habits can make a difference, both to the environment and our own health.

Source: Smashing Magazine, Monthly Web Development Update 6/2019: Rethinking Privacy And User Engagement