Collective #536



Livewire

Livewire is a full-stack framework for Laravel that makes building dynamic front-ends as simple as writing vanilla PHP.

Check it out



Our Sponsor

Divi: The Powerful Visual Page Builder

Divi is a revolutionary WordPress theme and visual page builder. With Divi, you can build your website visually. Add, arrange and design content and watch everything happen instantly right before your eyes.

Try it



Liquidfun

An absolutely amazing experiment by Lorenzo Cadamuro: a liquid simulation that reacts to the shaking of the window. More info in his tweet.

Check it out




Ciao

Ciao checks HTTP(S) URL endpoints for an HTTP status code (or errors on the lower TCP stack) and sends an e-mail on status change.

Check it out




Poolside FM

Inspired by a Mac interface from the ’90s, this fun website streams a continuous flow of upbeat tracks and hilarious VHS visuals.

Check it out








buttsss

A fun collection of beautiful round butt illustrations made by Pablo Stanley.

Check it out




jExcel

jExcel is a lightweight vanilla JavaScript plugin to create web-based interactive tables and spreadsheets compatible with Excel or any other spreadsheet software.

Check it out



Trennd

Trennd continually monitors the web for interesting keywords and topics, and then classifies them using Google Trends data.

Check it out




Nodes: Our Story

The story behind Nodes, a JavaScript-based 2D canvas for computational thinking soon to be released.

Read it





Collective #536 was written by Pedro Botelho and published on Codrops.


Source: Codrops, Collective #536

I Used The Web For A Day On A 50 MB Budget


Chris Ashton



This article is part of a series in which I attempt to use the web under various constraints, representing a given demographic of user. I hope to raise the profile of difficulties faced by real people, which are avoidable if we design and develop in a way that is sympathetic to their needs.

Last time, I navigated the web for a day using Internet Explorer 8. This time, I browsed the web for a day on a 50 MB budget.

Why 50 MB?

Many of us are lucky enough to be on mobile plans which allow several gigabytes of data transfer per month. Failing that, we are usually able to connect to home or public WiFi networks that are on fast broadband connections and have effectively unlimited data.

But there are parts of the world where mobile data is prohibitively expensive, and where there is little or no broadband infrastructure.

People often buy data packages of just tens of megabytes at a time, making a gigabyte a relatively large and therefore expensive amount of data to buy.
— Dan Howdle, consumer telecoms analyst at Cable.co.uk

Just how expensive are we talking?

The Cost Of Mobile Data

A 2018 study by cable.co.uk found that Zimbabwe was the most expensive country in the world for mobile data, where 1 GB cost an average of $75.20, ranging from $12.50 to $138.46. The enormous range in price is due to smaller amounts of data being very expensive, getting proportionally cheaper the bigger the data plan you commit to. You can read the study methodology for more information.

Zimbabwe is by no means a one-off. Equatorial Guinea, Saint Helena and the Falkland Islands are next in line, with 1 GB of data costing $65.83, $55.47 and $47.39 respectively. These countries generally have a combination of poor technical infrastructure and low adoption, meaning data is both costly to deliver and doesn’t have the economy of scale to drive costs down.

Data is expensive in parts of Europe too. A gigabyte of data in Greece will set you back $32.71; in Switzerland, $20.22. For comparison, the same amount of data costs $6.66 in the UK, or $12.37 in the USA. On the other end of the scale, India is the cheapest place in the world for data, at an average cost of $0.26. Kyrgyzstan, Kazakhstan and Ukraine follow at $0.27, $0.49 and $0.51 per GB respectively.

The speed of mobile networks, too, varies considerably between countries. Perhaps surprisingly, users experience faster speeds over a mobile network than WiFi in at least 30 countries worldwide, including Australia and France. South Korea has the fastest mobile download speed, averaging 52.4 Mbps, but Iraq has the slowest, averaging 1.6 Mbps download and 0.7 Mbps upload. The USA ranks 40th in the world for mobile download speeds, at around 34 Mbps, and is at risk of falling further behind as the world moves towards 5G.

As for mobile network connection type, 84.7% of user connections in the UK are on 4G, compared to 93% in the USA, and 97.5% in South Korea. This compares with less than 50% in Uzbekistan and less than 60% in Algeria, Ecuador, Nepal and Iraq.

The Cost Of Broadband Data

Meanwhile, a study of the cost of broadband in 2018 shows that a broadband connection in Niger costs $263 ‘per megabit per month’. This metric is a little difficult to comprehend, so here’s an example: if the average cost of broadband packages in a country is $22, and the average download speed offered by the packages is 10 Mbps, then the cost ‘per megabit per month’ would be $2.20.

It’s an interesting metric, and one that acknowledges that broadband speed is as important a factor as the data cap. A cost of $263 suggests a combination of extremely slow and extremely expensive broadband. For reference, the metric is $1.19 in the UK and $1.26 in the USA.

What’s perhaps easier to comprehend is the average cost of a broadband package. Note that this study was looking for the cheapest broadband packages on offer, ignoring whether or not these packages had a data cap, so provides a useful ballpark figure rather than the cost of data per se.

On package cost alone, Mauritania has the most expensive broadband in the world, at an average of $768.16 (a range of $307.26 to $1,368.72). This enormous cost includes building physical lines to the property, since few already exist in Mauritania. At 0.7 Mbps, Mauritania also has one of the slowest broadband networks in the world.

Taiwan has the fastest broadband in the world, at a mean speed of 85 Mbps. Yemen has the slowest, at 0.38 Mbps. But even countries with good established broadband infrastructure have so-called ‘not-spots’. The United Kingdom is ranked 34th out of 207 countries for broadband speed, but in July 2019 there was still a school in the UK without broadband.

The average cost of a broadband package in the UK is $39.58, and in the USA is $67.69. The cheapest average in the world is Ukraine’s, at just $5, although the cheapest broadband deal of them all was found in Kyrgyzstan ($1.27 — against the country average of $108.22).

Zimbabwe was the most costly country for mobile data, and the statistics aren’t much better for its broadband, with an average cost of $128.71 and a ‘per megabit per month’ cost of $6.89.

Absolute Cost vs Cost In Real Terms

All of the costs outlined so far are the absolute costs in USD, based on the exchange rates at the time of the study. These costs do not account for the cost of living, meaning that for many countries the cost is actually far higher in real terms.

I’m going to limit my browsing today to 50 MB, which in Zimbabwe would cost around $3.67 on a mobile data tariff. That may not sound like much, but teachers in Zimbabwe were striking this year because their salaries had fallen to just $2.50 a day.

For comparison, $3.67 is around half the $7.25 hourly minimum wage in the USA. As a Zimbabwean, I’d have to work for around a day and a half to earn the money to buy these 50 MB of data, compared to just half an hour in the USA. It’s not easy to compare cost of living between countries, but on wages alone the $3.67 cost of 50 MB of data in Zimbabwe would feel like $52 to an American on minimum wage.

Setting Up The Experiment

I launched Chrome and opened the dev tools, where I throttled the network to a slow 3G connection. I wanted to simulate a slow connection like those experienced by users in Uzbekistan, to see what kind of experience websites would give me. I also throttled my CPU to simulate being on a lower end device.

I opted to throttle my network to Slow 3G and my CPU to 6x slowdown.

I installed ModHeader and set the ‘Save-Data’ header to let websites know I want to minimise my data usage. This is also the header set by Chrome for Android’s ‘Lite mode’, which I’ll cover in more detail later.

I downloaded TripMode, an application for Mac which gives you control over which apps on your Mac can access the internet. Any other application’s internet access is automatically blocked.

You can enable/disable individual apps from connecting to the internet with TripMode. I enabled Chrome.

How far do I predict my 50 MB budget will take me? With the average weight of a web page being almost 1.7 MB, that suggests I’ve got around 29 pages in my budget, although probably a few more than that if I’m able to stay on the same sites and leverage browser caching.

Throughout the experiment I will suggest performance tips to speed up the first contentful paint and perceived loading time of the page. Some of these tips may not affect the amount of data transferred directly, but do generally involve deferring the download of less important resources, which on slow connections may mean the resources are never downloaded and data is saved.

The Experiment

Without any further ado, I loaded google.com, using 402 KB of my budget and spending $0.03 (around 1% of my Zimbabwe budget).

402 KB transferred, 1.1 MB resources, 24 requests.

All in all, not a bad page size, but I wondered where those 24 network requests were coming from and whether or not the page could be made any lighter.

Chrome devtools screenshot of the DOM of the Google homepage, with one inline style tag expanded.

Looking at the page markup, there are no external stylesheets — all of the CSS is inline.

Performance Tip #1: Inline Critical CSS

This is good for performance as it saves the browser having to make an additional network request in order to fetch an external stylesheet, so the styles can be parsed and applied immediately for the first contentful paint. There’s a trade-off to be made here, as external stylesheets can be cached but inline ones cannot (unless you get clever with JavaScript).

The general advice is for your critical styles (anything above-the-fold) to be inline, and for the rest of your styling to be external and loaded asynchronously. Asynchronous loading of CSS can be achieved in one remarkably clever line of HTML:

<link rel="stylesheet" href="/path/to/my.css" media="print" onload="this.media='all'">

The devtools show a prettified version of the DOM. If you want to see what was actually downloaded to the browser, switch to the Sources tab and find the document.

Switching to Sources and finding the index shows the ‘raw’ HTML that was delivered to the browser — a wall of minified code. What a mess!

You can see there is a LOT of inline JavaScript here. It’s worth noting that it has been uglified rather than merely minified.

Performance Tip #2: Minify And Uglify Your Assets

Minification removes unnecessary spaces and characters, but uglification actually ‘mangles’ the code to be shorter. The tell-tale sign is that the code contains short, machine-generated variable names rather than untouched source code. This is good as it means the script is smaller and quicker to download.
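For illustration, a tool like Terser (one common choice for JavaScript — the build setup here is purely an example) performs both steps in one pass:

// Original source:
function calculateTotal(basePrice, taxRate) {
  return basePrice * (1 + taxRate);
}

// After `npx terser app.js --compress --mangle`, roughly:
// function calculateTotal(n,t){return n*(1+t)}

Note how the parameter names have been mangled to single letters — the tell-tale sign described above.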

Even so, the inline scripts account for roughly 120 KB of the 210 KB HTML document (around 60 KB once gzipped). In addition, there are five external JavaScript files amounting to 291 KB of the 402 KB downloaded:

Five external JavaScript files in the Network tab of the devtools.

This means that JavaScript accounts for about 80 percent of the overall page weight.

This isn’t useless JavaScript; Google has to have some in order to display suggestions as you type. But I suspect a lot of it is tracking code and advertising setup.

For comparison, I disabled JavaScript and reloaded the page:

The disabled-JS version of Google search was only 102 KB and had just 5 network requests.

The JS-disabled version of Google search is just 102 KB, as opposed to 402 KB. Although Google can’t provide autosuggestions under these conditions, the site is still functional, and I’ve just cut my data usage down to a quarter of what it was. If I really did have to limit my data usage in the long term, one of the first things I’d do is disable JavaScript. It’s not as bad as it sounds.

Performance Tip #3: Less Is More

Inlining, uglifying and minifying assets is all well and good, but the best performance comes from not sending down the assets in the first place.

  • Before adding any new features, do you have a performance budget in place?
  • Before adding JavaScript to your site, can your feature be accomplished using plain HTML? (For example, HTML5 form validation — see the sketch after this list.)
  • Before pulling a large JavaScript or CSS library into your application, use something like bundlephobia.com to measure how big it is. Is the convenience worth the weight? Can you accomplish the same thing using vanilla code at a much smaller data size?
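To illustrate that second point, here is a minimal sketch of HTML5 form validation, which needs no JavaScript at all (the endpoint and field names are made up for the example):

<form action="/subscribe" method="post">
  <!-- type="email" and required give us validation for free -->
  <input type="email" name="email" required>
  <button type="submit">Subscribe</button>
</form>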

Analysing The Resource Info

There’s a lot to unpack here, so let’s get cracking. I’ve only got 50 MB to play with, so I’m going to milk every bit of this page load. Settle in for a short Chrome Devtools tutorial.

402 KB transferred, but 1.1 MB of resources: what does that actually mean?

It means 402 KB of content was actually downloaded, but in its compressed form (using a compression algorithm such as gzip or brotli). The browser then had to do some work to unpack it into something meaningful. The total size of the unpacked data is 1.1 MB.

This unpacking isn’t free — there are a few milliseconds of overhead in decompressing the resources. But that’s a negligible overhead compared to sending 1.1MB down the wire.

Performance Tip #4: Compress Text-based Assets

As a general rule, always compress your assets, using something like gzip. But don’t use compression on your images and other binary files — you should optimize these in advance at source. Compression could actually end up making them bigger.

And, if you can, avoid compressing files that are 1500 bytes or smaller. A TCP packet typically carries at most around 1500 bytes of data (the standard Ethernet MTU), so compressing an 800-byte file down to, say, 500 bytes saves nothing: it’s still transmitted in a single packet. Again, the cost is negligible, but it wastes some compression CPU time on the server and decompression CPU time on the client.
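In a Node/Express app, for instance, this can be a one-liner with the popular compression middleware — a sketch assuming an existing Express server, using the middleware’s threshold option to skip responses smaller than a packet:

const express = require('express');
const compression = require('compression');

const app = express();
// Gzip compressible responses, but leave anything under ~1500 bytes alone
app.use(compression({ threshold: 1500 }));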

Now back to the Network tab in Chrome: let’s dig into those priorities. Notice that each resource has a priority from “Highest” to “Lowest” — the browser’s best guess as to which resources are the most important to download. The higher the priority, the sooner the browser will try to download the asset.

Performance Tip #5: Give Resource Hints To The Browser

The browser will guess at what the highest priority assets are, but you can provide a resource hint using the <link rel="preload"> tag, instructing the browser to download the asset as soon as possible. It’s a good idea to preload fonts, logos and anything else that appears above the fold.
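For instance, preloading a web font and an above-the-fold logo might look like this (the file paths are placeholders):

<!-- Tell the browser this font will definitely be needed soon -->
<link rel="preload" href="/fonts/brand.woff2" as="font" type="font/woff2" crossorigin>
<!-- Preload the above-the-fold logo too -->
<link rel="preload" href="/img/logo.svg" as="image">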

Let’s talk about caching. I’m going to hold ALT and right-click to change my column headers to unlock some more juicy information. We’re going to check out Cache-Control.

There are lots of interesting fields tucked away behind ALT.

Cache-Control denotes whether or not a resource can be cached, how long it can be cached for, and what rules it should follow around revalidating. Setting proper cache values is crucial to keeping the data cost of repeat visits down.

Performance Tip #6: Set cache-control Headers On All Cacheable Assets

Note that the cache-control value begins with a directive of public or private, followed by an expiration value (e.g. max-age=31536000). What does the directive mean, and why the oddly specific max-age value?

A mixture of max-age values and public/private in the cache-control column.

The value 31536000 is the number of seconds there are in a year, and is the theoretical maximum value allowed by the cache-control specification. It is common to see this value applied to all static assets and effectively means “this resource isn’t going to change”. In practice, no browser is going to cache for an entire year, but it will cache the asset for as long as makes sense.

To explain the public/private directive, we must explain the two main caches that exist off the server. First, there is the traditional browser cache, where the resource is stored on the user’s machine (the ‘client’). And then there is the CDN cache, which sits between the client and the server; resources are cached at the CDN level to prevent the CDN from requesting the resource from the origin server over and over again.

Diagram: how the browser cache and the CDN cache sit between the client and the origin server.

A Cache-Control directive of public allows the resource to be cached in both the client and the CDN. A value of private means only the client can cache it; the CDN is not supposed to. This latter value is typically used for pages or assets that exist behind authentication, where it is fine to be cached on the client but we wouldn’t want to leak private information by caching it in the CDN and delivering it to other users.
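Putting that together, typical response headers might look something like this (the values are illustrative):

A fingerprinted static asset, cacheable anywhere, ‘forever’:
Cache-Control: public, max-age=31536000

A personalised page, cacheable only in the browser, for ten minutes:
Cache-Control: private, max-age=600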

The Google logo’s cache-control setting: private, max-age=31536000.

One thing that got my attention was that the Google logo has a cache control of “private”. Other images on the page do have a public cache, and I don’t know why the logo would be treated any differently. If you have any ideas, let me know in the comments!

I refreshed the page and most of the resources were served from cache, apart from the page itself, which as you’ve seen already is private, max-age=0, meaning it cannot be cached. This is normal for dynamic web pages where it is important that the user always gets the very latest page when they refresh.

It was at this point I accidentally clicked on an ‘Explanation’ URL in the devtools, which took me to the network analysis reference, costing me about 5 MB of my budget. Oops.

Google Dev Docs

4.2 MB of this new 5 MB page was down to images; specifically SVGs. The weightiest of these was 186 KB, which isn’t particularly big — there were just so many of them, and they all downloaded at once.

This is a loooong page, and all the images downloaded on page load.

That 5 MB page load was 10% of my budget for today. So far I’ve used 5.5 MB, including the no-JavaScript reload of the Google homepage, and spent $0.40. I didn’t even mean to open this page.

What would have been a better user experience here?

Performance Tip #7: Lazy-load Your Images

Ordinarily, if I accidentally clicked on a link, I would hit the back button in my browser. I’d have received no benefit whatsoever from downloading those images — what a waste of 4.2 MB!

Apart from video, where you generally know what you’re getting yourself into, images are by far the biggest culprit to data usage on the web. A study of the world’s top 500 websites found that images take up to 53% of the average page weight. “This means they have a big impact on page-loading times and subsequently overall performance”.

Instead of downloading all of the images on page load, it is good practice to lazy-load the images so that only users who are engaged with the page pay the cost of downloading them. Users who choose not to scroll below the fold therefore don’t waste any unnecessary bandwidth downloading images they’ll never see.

There’s a great css-tricks.com guide to rolling out lazy-loading for images which offers a good balance between those on good connections, those on poor connections, and those with JavaScript disabled.

If this page had implemented lazy loading as per the guide above, each of the 38 SVGs would have been represented by a 1 KB placeholder image by default, and only loaded into view on scroll.
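Browsers are beginning to support lazy-loading natively via the loading attribute (very new at the time of writing), and the same effect can be achieved today with a few lines of IntersectionObserver JavaScript. A sketch of both approaches, with placeholder file names:

<!-- Native, where supported: -->
<img src="diagram.svg" loading="lazy" alt="Diagram">

<!-- JavaScript pattern: a tiny placeholder in src, the real image in data-src -->
<img src="placeholder.svg" data-src="diagram.svg" alt="Diagram">

var observer = new IntersectionObserver(function (entries) {
  entries.forEach(function (entry) {
    if (entry.isIntersecting) {
      // Swap in the real image as it approaches the viewport
      entry.target.src = entry.target.dataset.src;
      observer.unobserve(entry.target);
    }
  });
});
document.querySelectorAll('img[data-src]').forEach(function (img) {
  observer.observe(img);
});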

Performance Tip #8: Use The Right Format For Your Images

I thought that Google had missed a trick by not using WebP, which is an image format that is 26% smaller in size compared to PNGs (with no loss in quality) and 25-34% smaller in size compared to JPEGs (and of a comparable quality). I thought I’d have a go at converting SVG to WebP.

Converting to WebP did bring one of the SVGs down from 186 KB to just 65 KB, but actually, looking at the images side by side, the WebP came out grainy:

The SVG (left) is noticeably crisper than the WebP (right).

I then tried converting one of the PNGs to WebP, which is supposed to be lossless and should come out smaller. However, the WebP output was heavier (127 KB, up from 109 KB)!

The PNG (left) is a similar quality to the WebP (right) but is smaller at 109 KB compared to 127 KB.

This surprised me. WebP isn’t necessarily the silver bullet we think it is, and even Google have neglected to use it on this page.

So my advice would be: where possible, experiment with different image formats on a per-image basis. The format that keeps the best quality for the smallest size may not be the one you expect.
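One low-risk way to run that experiment in production is the <picture> element, which lets each browser pick the first format it supports (file names are placeholders):

<picture>
  <!-- Served only to browsers that understand WebP -->
  <source srcset="hero.webp" type="image/webp">
  <!-- Everyone else falls back to the JPEG -->
  <img src="hero.jpg" alt="Hero image">
</picture>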

Now back to the DOM. I came across this:

Screenshot of the page markup, showing the async attribute on the Google Analytics script.

Notice the async keyword on the Google Analytics script?

Google Analytics has ‘low’ priority in the performance analysis output of the devtools.

Despite being one of the first things in the head of the document, this was given a low priority, as we’ve explicitly opted out of being a blocking request by using the async keyword.

A blocking request is one that stops the rendering of the page. A <script> call is blocking by default, stopping the parsing of the HTML until the script has downloaded, compiled and executed. This is why we traditionally put <script> calls at the end of the document.

Performance Tip #9: Avoid Writing Blocking Script Calls

By adding the async attribute to our <script> tag, we’re telling the browser not to stop rendering the page but to download the script in the background. If the HTML is still being parsed by the time the script is downloaded, the parsing is paused while the script is executed, and then resumed. This is significantly better than blocking the rendering as soon as <script> is encountered.

There is also a defer attribute, which is subtly different. <script defer> tells the browser to render the page while the script loads in the background, and even if the HTML is still being parsed by the time the script is downloaded, the script must wait until the page is rendered before it can be executed. This makes the script completely non-blocking. Read “Efficiently load JavaScript with defer and async” for more information.
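Side by side, the two variants look like this (the script names are placeholders):

<!-- async: downloads in the background, executes as soon as it arrives
     (parsing pauses while it runs) -->
<script src="analytics.js" async></script>

<!-- defer: downloads in the background, executes only after the document
     has been fully parsed -->
<script src="app.js" defer></script>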

Anyway, enough Google dissecting. It’s time to try out another site. I’ve still got almost 45 MB of my budget left!

Amazon


The Amazon homepage loaded with a total weight of about 6 MB. One of its resources was a 587 KB image that I couldn’t even find on the page. This was a PNG, presumably to have crisp text, but on a photographic background — a classic combination that’s terrible for performance.

An image of spanners with overlaid text (“Hands-on time. Discover our tool selection for your car”) — this grainy image used over 1% of my budget.

In fact, there were a few several-hundred-kilobyte images in my network tab that I couldn’t actually see on the page. I suspect a misconfiguration somewhere on Amazon, but these invisible images combined chewed through at least 1 MB of my data.

How about the hero image? It’s the main image on the page, and it’s only 94 KB transferred — but it could be reduced in size by about 15% if it were cropped directly around the text and footballers. We could then apply the same background color in CSS as is in the image. This has the additional advantage of being resizable down to smaller screens whilst retaining legibility of text.

The hero image: “Premier League football — Live on Prime Video”.

I’ve said it once, and I’ll say it again: optimising and lazy-loading your images is the single biggest improvement you can make to the page weight of your site.

Optimizing images provided, by far, the most significant data reduction. You can make the case JavaScript is a bigger deal for overall performance, but not data reduction. Optimizing or removing images is the safest way of ensuring a much lighter experience and that’s the primary optimization Data Saver relies on.
— Tim Kadlec, Making Sense of Chrome Lite Pages

To be fair to Amazon, if I resize the browser to a mobile size and refresh the page, the site is optimized for mobile and the total page weight is only 2.1 MB.

101 requests, 2.1 MB transferred.

But this brings me onto my next point…

Performance Tip #10: Don’t Make Assumptions About Data Connections

It’s difficult to detect if someone on a desktop is on a broadband connection or is tethering through a data-limited dongle or mobile. Many people work on the train like that, or live in an area where broadband infrastructure is poor but mobile signal is strong. In Amazon’s case, there is room to make some big data savings on the desktop site and we shouldn’t get complacent just because the screen size suggests I’m not on a mobile device.

Yes, we should expect a larger page load if our viewport is ‘desktop sized’ as the images will be larger and better optimized for the screen than a grainier mobile one. But the page shouldn’t be orders of magnitude bigger.

Moreover, I was sending the Save-Data header with my request. This header explicitly indicates a preference for reduced data usage, and I hope more websites start to take notice of it in the future.
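Honouring the header can be as simple as feature-detecting its client-side counterpart — a sketch using the saveData flag of the Network Information API, which is only exposed in some browsers:

if (navigator.connection && navigator.connection.saveData) {
  // The user has asked for reduced data usage:
  // skip autoplaying video, serve smaller images, defer non-essential assets.
}

(On the server, the same decision can be made by checking for the Save-Data: on request header.)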

The initial ‘desktop’ load may have been 6 MB, but after sitting and watching it for a minute it had climbed to 8.6 MB as the lower-priority resources and event tracking kicked into action. This page weight includes almost 1.7 MB of minified JavaScript. I don’t even want to begin to look at that.

Performance Tip #11: Use Web Workers For Your JavaScript

Which would be worse — 1.7 MB of JavaScript or 1.7 MB of images? The answer is JavaScript: the two assets are not equivalent when it comes to performance.

A JPEG image needs to be decoded, rasterized, and painted on the screen. A JavaScript bundle needs to be downloaded and then parsed, compiled, executed — and there are a number of other steps that an engine needs to complete. Be aware that these costs are not quite equivalent.
— Addy Osmani, The Cost of JavaScript in 2018

If you must ship this much JavaScript, try putting it in a web worker. This keeps the bulk of JavaScript off the main thread, which is now freed up for repainting the UI, helping your web page to stay responsive on low-powered devices.
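A minimal sketch of the pattern (the file names, data and functions here are all hypothetical):

// main.js — hand the heavy lifting to a background thread
var worker = new Worker('heavy-work.js');
worker.postMessage({ items: data });
worker.onmessage = function (event) {
  render(event.data); // the main thread stayed free to paint the UI
};

// heavy-work.js
self.onmessage = function (event) {
  // expensiveProcessing is a stand-in for your costly computation
  self.postMessage(expensiveProcessing(event.data.items));
};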

I’m now about 15.5 MB into my budget, and have spent $1.14 of my Zimbabwe data budget. I’d have had to have worked for half a day as a teacher to earn the money to get this far.

Pinterest

I’ve heard good things about Pinterest’s performance, so I decided to put it to the test.

A staggering 327 requests, making 6.1 MB of data.

Perhaps this isn’t the fairest of tests; I was taken to the sign-in page, upon which an asynchronous process found I was logged into Facebook and logged me in automatically. The page loaded relatively quickly, but the requests crept up as more and more content was preloaded.

However, I saw that on subsequent page loads, the service worker surfaced much of the content — saving about half of the page weight:

8.2 / 15.6 MB resources, and 39 / 180 requests handled by the service worker cache.

The Pinterest site is a progressive web app; it installed a service worker to manually handle caching of CSS and JS. I could now turn off my WiFi and continue to use the site (albeit not very usefully):

You can’t do much when you’re offline.

Performance Tip #12: Use Service Workers To Provide Offline Support

Wouldn’t it be great if I only had to load a website once over network, and now get all the information I need even if I’m offline?

A great example would be a website that shows the weather forecast for the week. I should only need to download that page once. If I turn off my mobile data and subsequently go back to the page at some point, it should be able to serve the last known content to me. If I connect to the internet again and load the page, I would get a more up-to-date forecast, but static assets such as CSS and images should still be served locally from the service worker.

This is possible by setting up a service worker with a good caching strategy so that cached pages can be re-accessed offline. The lodash documentation website is a nice example of a service worker in the wild:

The Lodash docs work offline, with ‘ServiceWorker’ shown next to each request in the devtools.

Content that rarely updates and is likely to be used quite regularly is a perfect candidate for service worker treatment. Dynamic sites with ever-changing news feeds aren’t quite so well suited for offline experiences, but can still benefit.
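A minimal cache-first service worker for static assets might look something like this (the cache name and asset paths are placeholders, and a real strategy would also need to handle cache versioning):

// sw.js
var CACHE = 'static-v1';

self.addEventListener('install', function (event) {
  event.waitUntil(
    caches.open(CACHE).then(function (cache) {
      // Pre-cache the static shell of the site
      return cache.addAll(['/styles.css', '/app.js', '/logo.svg']);
    })
  );
});

self.addEventListener('fetch', function (event) {
  // Serve from cache when possible, falling back to the network
  event.respondWith(
    caches.match(event.request).then(function (cached) {
      return cached || fetch(event.request);
    })
  );
});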

The second Pinterest page load was 443 KB.

Service workers can truly save the day when you’re on a tight data budget. I’m not convinced the Pinterest experience was the most optimal in terms of data usage — subsequent pages were around the 0.5 MB mark even on pages with few images — but letting your JavaScript handle page requests for you and keeping the same navigational elements in place can be very performant. The BBC manages a transfer size of just 3.1 KB for return visits to articles that are renderable via the single-page application.

So far, Pinterest alone has chewed through 14 MB, which means I’ve blown around 30 MB of my budget, or $2.20 (almost a day’s wages) of my Zimbabwe budget.

I’d better be careful with my final 20 MB… but where’s the fun in that?

Gamespot

I picked this one because it felt noticeably sluggish on my mobile in the past and I wanted to dig into the reasons why. Sure enough, loading the homepage consumes 8.5 MB of data.

The Gamespot homepage trickled up to 8.5 MB, and a whopping 347 requests.

6.5 MB of this was down to an autoplaying video halfway down the page, which — to be fair — didn’t appear to download until I began scrolling. Nevertheless…

The video is clipped off-screen in the viewport.

I could only see half the video in my viewport — the right hand side was clipped. It was also 30 seconds long, and I would wager that most people won’t sit and watch the whole thing. This single asset more than tripled the size of the page.

Performance Tip #13: Don’t Preload Video

As a rule, unless your site’s primary mode of communication is video, don’t preload it.

If you’re YouTube or Netflix, it’s reasonable to assume that someone coming to your page will want the video to auto-load and auto-play. There is an expectation that the video will chew through some data, but that it’s a fair exchange for the content. But if you’re a site whose primary medium is text and image — and you just happen to offer additional video content — then don’t preload the video.

Think of news articles with embedded videos. Many users only want to skim the article headline before moving on to their next thing. Others will read the article but ignore any embeds. And others will diligently click and watch each embedded video. We shouldn’t hog the bandwidth of every user on the assumption that they’re going to want to watch these videos.

To reiterate: users don’t like autoplaying video. As developers we only do it because our managers tell us to, and they only tell us to do it because all the coolest apps are doing it, and the coolest apps are only doing it because video ads generate 20 to 50 times more revenue than traditional ads. Google Chrome has started blocking autoplay videos for some sites, based on personal preferences, so even if you develop your site to autoplay video, there’s no guarantee that’s the experience your users are getting.

If we agree that it’s a good idea to make video an opt-in experience (click to play), we can take it a step further and make it click to load too. That means mocking a video placeholder image with a play button over it, and only downloading the video when you click the play button. People on fast connections should notice no difference in buffer speed, and people on slow connections will appreciate how fast the rest of your site loaded because it didn’t have to preload a large video file.
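In its simplest form, this needs little more than the browser’s built-in preload hint — preload="none" is advisory, but widely honoured (file names are placeholders):

<!-- Only the poster image is downloaded up front; the video itself
     isn't fetched until the user presses play -->
<video controls preload="none" poster="gameplay-still.jpg">
  <source src="gameplay.mp4" type="video/mp4">
</video>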

Anyway, back to Gamespot, where I was indeed forced to preload a large video file I ended up not watching. I then clicked through to a game review page that weighed another 8.5 MB, this time with 5.4 MB of video, before I even started scrolling down the page.

What was really galling was when I looked at what the video actually was. It was an advert for a Samsung TV! This advert cost me $0.40 of my Zimbabwe wages. Not only was it pre-loaded, but it also didn’t end up playing anywhere as far as I’m aware, so I never actually saw it.

This advert wasted 5.4 MB of my precious data.

The ‘real’ video — the gameplay footage (in other words, the content) — wasn’t actually loaded until I clicked on it. And that ploughed through my remaining data in seconds.

That’s it. That’s my 50 MB gone. I’ll need to work another 1.5 days as a Zimbabwean schoolteacher to repeat the experience.

Performance Tip #14: Optimize For First Page Load

What’s striking is that I used 50 MB of data and in most cases, I only visited one or two pages on any given site. If you think about it, this is true of most user journeys today.

Think about the last time you Googled something. You no doubt clicked on the first search result. If you got your answer, you closed the tab, or else you hit the back button and moved onto the next search result.

With the exception of a few so-called ‘destination sites’ such as Facebook or YouTube, where users habitually go as a starting point for other activities, the majority of user journeys are ephemeral. We stumble across random sites to get the answers to our questions, never to return to those sites again.

Web development practices are heavily skewed towards optimising for repeat visitors. “Cache these assets — they’ll come in handy later”. “Pre-load this onward journey, in case the user clicks to read more”. “Subscribe to our mailing list”.

Instead, I believe we should optimize heavily for one-off visitors. Call it a controversial opinion, but maybe caching isn’t really all that important. How important can a cached resource that never gets surfaced again be? And perhaps users aren’t actually going to subscribe to your mailing list after reading just the one article, so downloading the JavaScript and CSS for the mail subscription modal is both a waste of data and an annoying user experience.

The Decline Of Proxy Browsers

I had hoped to try out Opera Mini as part of this experiment. Opera Mini is a mobile web browser which proxies web pages through Opera’s compression servers. It accounts for 1.42% of global traffic as of June 2019, according to caniuse.com.

Opera Mini claims to save up to 90% of data by doing some pretty intensive transcoding. HTML is parsed, images are compressed, styling is applied, and a certain amount of JavaScript is executed on Opera’s servers. The server doesn’t respond with HTML as you might expect — it actually transcodes the data into Opera Binary Markup Language (OBML), which is progressively loaded by Opera Mini on the device. It renders what is essentially an interactive ‘snapshot’ of the web page — think of it as a PDF with hyperlinks inside it. Read Tiffany Brown’s excellent article, “Opera Mini and JavaScript” for a technical deep-dive.

It would have been a perfect way to eke out my 50 MB budget as far as possible. Unfortunately, Opera Mini is no longer available on iOS in the UK. Attempting to visit it in the app store throws an error:

The item you've requested is not currently available in the UK Store.


It’s still available “in some markets” but reading between the lines, Opera will be phasing out Opera Mini for its new app — Opera Touch — which doesn’t have any data-saving functionality apart from the ability to natively block ads.

Opera desktop used to have a ‘Turbo mode’, acting as a traditional proxy server (returning an HTML document instead of OBML), applying data-saving techniques but less intensively than Opera Mini. According to Opera, JavaScript continues to work and “you get all the videos, photos and text that you normally would, but you eat up less data and load pages faster”. However, Opera quietly removed Turbo mode in v60 earlier this year, and Opera Touch doesn’t have a Turbo mode either. Turbo mode is currently only available on Opera for Android.

Android is where all the action is in terms of data-saving technology. Chrome offers a ‘Lite mode’ on its mobile browser for Android, which is not available for iPhones or iPads because of “platform constraints”. Outside of mobile, Google used to provide a ‘Data Saver’ extension for Chrome desktop, but this was canned in April.

Lite mode for Chrome Android can be forcibly enabled, or automatically kicks in when the network’s effective connection type is 2G or worse, or when Chrome estimates the page will take more than 5 seconds to reach first contentful paint. Under these conditions, Chrome will request the lite version of the HTTPS URL as cached by Google’s servers, and display this stripped-down version inside the user’s browser, alongside a “Lite” marker in the address bar.

‘Lite mode’ on Chrome for Android, denoted by a button in the toolbar. Image: Google.

I’d love to try it out — apparently it disables scripts, replaces images with placeholders, prevents loading of non-critical resources and shows offline copies of pages if one is available on the device. This saves up to 60% of data. However, it isn’t available in private (Incognito) mode, which hints at some of the privacy concerns surrounding proxy browsers.

Lite mode shares the HTTPS URL with Google, therefore it makes sense that this mode isn’t available in Incognito. However other information such as cookies, login information, and personalised page content is not shared with Google — according to ghacks.net — and “never breaks secure connections between Chrome and a website”. One wonders why seemingly none of these data-saving services are allowed on iOS (and there is no news as to whether Lite mode will ever become available on iOS).

Data saver proxies require a great deal of trust; your browsing activity, cookies and other sensitive information are entrusted to some server, often in another country. Many proxies simply won’t work anymore because a lot of sites have moved to HTTPS, meaning initiatives such as Turbo mode have become a largely “useless feature”. HTTPS prevents this kind of man-in-the-middle behaviour, which is a good thing, although it has meant the demise of some of these proxy services and has made sites less accessible to those on poor connections.

I was unable to find any OSX or iOS compatible data-saving tool except for Bandwidth Hero for Firefox (which requires setting up your own data compression service — far beyond the technical capabilities of most users!) and skyZIP Proxy (which, last updated in 2017 and riddled with typos, I just couldn’t bring myself to trust).

Conclusion

Reducing the data footprint of your website goes hand in hand with improving frontend performance. It is the single most reliable thing you can do to speed up your site.

In addition to the cost of data, there are lots of good reasons to focus on performance, as described in a GOV.UK blog post on the subject:

  • 53% of users will abandon a mobile site if it takes more than 3 seconds to load.
  • People have to concentrate 50% more when trying to complete a simple task on a website using a slow connection.
  • More performant web pages are better for the battery life of the user’s device, and typically require less power on the server to deliver. A performant site is good for the environment.

We don’t have the power to change the global cost of data inequality. But we do have the power to lessen its impact, improving the experience for everyone in the process.


Source: Smashing Magazine, I Used The Web For A Day On A 50 MB Budget

Should You Create An MVP Before Creating An App?


Suzanne Scacca



Can you afford to gamble on an idea for an app or on an assumption about how consumers will respond to it? I bet your clients aren’t too comfortable doing that either, especially when it’s their money and reputation on the line.

An app can be a risky investment for a business if it’s not approached with care. Even then, the most well-researched of app concepts can lead to disappointing user download and retention rates.

Whether you’re in the business of building mobile apps or SaaS products, have you thought about using minimum viable products (MVPs) to safeguard your clients’ investments?

Not only do MVPs allow you to get projects through your pipeline more quickly, but they enable developers to create stronger products overall for their clients.

Here’s what you need to know.

The Value Of MVPs In App Development

Frank Robinson was the first to define what an MVP was back in 2001. At its root, an MVP is a scaled-back version of a product that’s released to the public for the purposes of testing and validating the product’s concept and viability on the market.

Eric Ries, the author of The Lean Startup, was one of the early advocates of MVPs, and he had some interesting things to say about why and how we should use them back in 2013.

The point isn’t to create leaner products. It’s to get the most basic version or concept of an app into the hands of adopters and evangelists. That way, the developer collects user feedback early on that, in turn, is used to properly shape the product into its final version.

Take Dropbox, for example. This is what the product’s landing page looked like back in 2009:

The Dropbox website and software from 2009. (Source: Dropbox)

It’s a simple page that includes the company name, an explanation of the software and a link to download the desktop or mobile app. For users that want to know more about what they’re getting, the “tour” took them to a mini-site with more information:

Dropbox’s MVP provides basic details on its software. (Source: Dropbox)

That’s a far cry from the powerhouse storage, content creation and collaboration service that both consumers and businesses use today:

The Dropbox website and SaaS in 2019. (Source: Dropbox)

But that’s the beauty of the MVP. Essentially, it forces developers to build products with only the minimum — but absolutely essential — set of features.

Dropbox didn’t need to foresee the power of cloud storage services or to create something that wasn’t right for the market at the time. All it needed to do was launch a simple solution that users needed then and there. Users could then validate the product and provide the company with the direction it needed to take its product.

There are other benefits to creating an MVP:

  • You can get the product on the market much more quickly than if you were to wait for the full app to be developed.
  • You get a chance to test the viability of the concept before you commit too many man-hours to the job.
  • You give yourself more room (and maybe even a little forgiveness, too) to work out the kinks in your final product.
  • You save money with an MVP. First, because you only spend time building features that are absolutely needed. Second, because you might find that users are content with the scaled-back version and you won’t need to do much more work to finalize the product.
  • With a tested idea that’s been embraced by users, you have something to bring to investors which could make the rest of the development process go much more smoothly.

As Eric says in the video, an MVP is the best way to maximize your chances of success and to do so in a much shorter timeframe than complete product development allows for.

How To Build A Valuable MVP That Users Want to Test

Your MVP’s success rides on its ability to leverage insights and feedback provided by early adopters — the ones who are 100% on your side, believe in the product and want to help you fill in the gaps. So, don’t lose sight of that.

An MVP is not some half-assed app thrown together. It still needs to be valuable.

Here are some things you must do before you build and launch your MVP:

1. Decide the Product’s Purpose

If you want your app to succeed, it needs to uniquely solve a problem for a large segment of the consumer base. That means your MVP needs to clearly break down what the product does and why users need it.

For example, this is how Uber (then UberCab) sold itself during its beta in 2010:

The website for Uber’s predecessor, UberCab, in 2010. (Source: Uber)

Like the Dropbox example earlier, it’s extremely simple in concept and no-frills in terms of explaining what it is or why it’s so valuable. But you still get the idea. It’s an app that lets people order and pay for a car from their phone. Essentially, it’s a convenient substitution for cabs.

Jump ahead a year and you’ll see that Uber started to firm up its identity and value proposition with its official product launch:

Uber starts to refine its image in 2011 after beta testing completes. (Source: Uber)

This was back in 2011 when Uber dropped the “Cab” and labeled itself an on-call private driving service. It was a way to let consumers experience certain luxurious privileges they might not otherwise have been able to afford.

Although that’s not the final form Uber ended up taking, you can see how early user feedback helped the product developers decide which parts of the platform were really worth highlighting and building upon.

This is exactly the kind of thing that will happen when you build an MVP and start to gather valuable insights from users about what they want and which features they need. But, first, you have to start by getting clear on its general purpose and value. You can refine it later on.

2. Locate Your Ideal Users

You have your concept. Now, it’s time to figure out if consumers are going to want it. Just because an MVP is cheaper and faster to build doesn’t mean it can’t end up a complete waste of your time and resources. You have to at least confirm that the interest is there and then define, in clear terms, who your target user is.

Specifically, you need to think about location.

In the Uber example above, you can see that the beta product was only tested in San Francisco.

The initial version of Airbnb did something similar. Joe Gebbia, the co-founder of Airbnb, tells the story of his MVP on a 2017 episode of How I Built This.

Basically, he was low on cash and decided to rent out air mattresses in his San Francisco apartment for an upcoming conference. Knowing that hotels would be short on rooms, he figured he could make money off of it. But it wasn’t just rent money he made. He got an idea for a new business after a lot of people showed interest in renting space in his apartment.

So, he and his partner created a website called “AirBed & Breakfast”. Once it went live, though, it spread far beyond the original San Francisco test area.

An early version of the Airbnb concept from 2009. (Source: Airbnb)

In 2009, there were Airbnb rentals in 72 countries. Today, you practically have your pick of the litter in any town around the world. But it all started with San Francisco.

So, as you set about building your product, think about where the best places will be to test and get feedback on your app before you do a full release of it. You want the area to be a good representation of the population and demographics you aim to target. You also have to make sure there’s a demand for the product and that your target users can afford to use it (once you start monetizing).

3. Choose an MVP Format

The format of your MVP is another important thing to think about before you do any building.

In some cases, you’re going to have to build a workable product. For example, let’s say your goal is to build a new dating app. There are tons of dating apps on the market; with two apps, in particular, that continually dominate the pack. You know that building any sort of mobile dating app would be a huge and costly gamble, no matter how much you cut back on features. So, what do you do?

You could build a PWA dating app instead. The costs would be lower, the time-to-market significantly faster and it would be much easier to get your MVP in front of users than if you were to put something on the app store. You might even find that the PWA suffices in terms of product format in the end.

In other cases, the MVP won’t even need to be an actual product. It can just be a website announcing the product or providing a wireframe/prototype of the concept.

In 2018, Rand Fishkin announced that he was leaving Moz, the company he co-founded in 2004. Simultaneously, he announced a new product called SparkToro.

The SparkToro MVP landing page describes the upcoming product, but doesn’t provide access. (Source: SparkToro)

Now, Rand is someone who can launch a concept as an MVP and for it to still be successful. He has a long-standing history and solid reputation in this space, so, of course, users are going to gravitate to this new product despite it being unavailable for consumption.

For those of you building an MVP for a new brand, you probably won’t get so lucky. However, it’s really going to depend on what type of product you’re planning to build.

If there’s absolutely no way to create the product in a scaled-back version, then this could be an option worth exploring. It would also be a good idea if you or your client have absolutely no funds and need validated feedback to prove the viability of your concept to investors. That’s really the only way I see Joe Schmoes getting away with this.

If you do go this route, you’re going to need a really good explainer section, too. This is what SparkToro has on its What We’re Building page:

SparkToro’s website explains what it’s working on building. (Source: SparkToro)

I think for the kinds of users that would gravitate to a product like this — namely, advanced marketers that actually need this kind of solution — this way of testing the concept and viability of the features is good. It’s written in their language and with visuals they understand.

However, for users that aren’t familiar with your brand or aren’t as well-trained as Rand’s audience, a wireframe or prototype of the product’s dashboard would be a better idea. Even an explainer video from the founder would work well. It just needs to be something that convinces users to sign up and start providing feedback as early as possible.

4. Find Your Actual Minimum

If you watch the video of Eric Ries, you’ll see that he provides a formula for defining the minimum features of your MVP. It goes like this:

# of minimum features you think you need / 8 = The True Minimum

It’s understandable if that formula makes you feel apprehensive. But think about it like this:

You build an MVP that is as simple as it can possibly be without it becoming useless. You ship it out to users and give them a chance to provide feedback.

A few things might happen as a result:

They absolutely hate it.

They complain to you about how Feature A sucks and how they wish it did something else or how Feature B was almost there, but then fell short of expectations. That’s perfect! Your test users will tell you exactly what they want from your product. Get enough consistent feedback and you’ll have a list of must-have features that need to appear in the next version of the app.

They’re okay with it, but don’t love it… yet.

Again, it’s okay if users aren’t 100% happy with it. You’ve given them an opportunity to try out a product that’s going to be awesome and they see the promise in it. Give them a chance to speak their mind and let you know what they loved and what they didn’t. Then, focus on strengthening those weak points and including the features that will make it a real game-changer.

They’ll love it as is.

Let’s be honest, this isn’t likely to happen. But wouldn’t it be great if feedback was so scarce that you could roll with the MVP as is? Plus, think of all that time you saved yourself and money you saved your client by cutting the product back so much. Sometimes simpler is better.

Don’t forget to thank these users for their feedback and support of the product. There’s no way you could create the solution they need without their insights and so it would be in your best interest to recognize the role they play in this. In return, they’ll continue to be your product’s evangelists long after launch.

5. Design Your Landing Page Early

Although I’m not too keen on a landing page or mini-website serving solely as the MVP (for the aforementioned reason), I do think it’s a good idea to get a mobile-first landing page up while the MVP is in the works.

Gaming apps and SaaS would be especially good choices to launch a beta signup page early. Here’s an example from Hytale:

Gaming app Hytale uses its landing page to educate users on the game and get beta users before launch. (Source: Hytale)

If you want your MVP to succeed, you should put some of that extra time you now have towards building out a strong landing page. Start by researching the early websites from the companies featured in this post. They all successfully explained their concepts, soft-marketed their products and convinced early users to sign up for testing.

While you’re at it, you should set up your blog, social media accounts and community features (with an active newsletter), too. You never know. Someone might find the announcement of your MVP somewhere other than Google search and decide they want to bookmark the site or sign up early to be a beta tester.

It’s never too soon to start getting buy-in from your user set!

6. Define Your Success Criteria

Last but not least, you have to decide how you’re going to measure the success of your MVP. Because it’s not just about the quality of feedback.

Consider the following:

  • How many visitors visited your landing page?
  • How many of those people signed up for the beta?
  • How many users did you retain over a set period (1 month, 3 months, etc.)?
  • How many people provided feedback and was it a substantial enough set to make solid decisions about the product design and features going forward?
  • Did the demographics of your user set match the audience you designed the app for? Why do you think that was?
  • How much time, on average, did users spend inside the app?
  • Which features did they spend the most time with? The least?
  • Which features received the most favorable feedback? The least?
  • Were there certain users who had a positive experience with the product? What made them different?

Take all of the information you gathered — from the original landing page, the beta testers, the usage data and so on — and really look over it all. What is it telling you about the MVP you’ve designed? And, now, what are you going to do with it?

Will you leave it as is or build it out to the full product it’s meant to be and that users want?

Will it be easy to attract and acquire customers based on the usage data you’ve collected? What’s more, will you be able to retain those users or is it more cost-effective to keep your app browser-side instead of in native app form?

And, finally, how much can and should you charge for access to the product? Will it ultimately make the company profitable or is there just not enough interest (at least in the monetization side of things) to make this a worthwhile venture?

I know I’m leaving you with a lot of questions, but there’s a lot you’re going to need to sort out once the tests begin. Plus, it’s the whole reason you created an MVP in the first place. This user feedback is invaluable to the process and is the only way you’re going to know if it’s a product worth pushing out to the market or taking back to the drawing board.

Smashing Editorial
(ra, yk, il)

Source: Smashing Magazine, Should You Create An MVP Before Creating An App?

Collective #535

dreamt up by webguru in Uncategorized | Comments Off on Collective #535



Divi

Our Sponsor

Divi: The Powerful Visual Page Builder

Divi is a revolutionary WordPress theme and visual page builder for WordPress. With Divi, you can build your website visually. Add, arrange and design content and watch everything happen instantly right before your eyes.

Try it




C535_rooki

Rooki

Rooki is an online magazine for students, interns and juniors with intimate interviews, free design student awards and free resources.

Check it out






C535_zindex

Index fun

A very interesting analysis on which z-index values are used on websites. By Philippe Suter.

Read it






C535_multistep

Progress Tracker

An updated HTML component to illustrate the steps in a multi-step process, e.g. a multi-step form, a timeline or a quiz.

Check it out






C535_fontmur

Free Font: Le Murmure

Recently expanded with Cyrillic, Le Murmure is a custom open-source typeface designed by Jérémy Landes for the design agency Murmure and released by Velvetyne Type Foundry.

Get it




Collective #535 was written by Pedro Botelho and published on Codrops.


Source: Codrops, Collective #535

Designing And Building A Progressive Web Application Without A Framework (Part 2)

dreamt up by webguru in Uncategorized | Comments Off on Designing And Building A Progressive Web Application Without A Framework (Part 2)

Designing And Building A Progressive Web Application Without A Framework (Part 2)

Designing And Building A Progressive Web Application Without A Framework (Part 2)

Ben Frain



The raison d’être of this adventure was to push your humble author a little in the disciplines of visual design and JavaScript coding. The functionality of the application I’d decided to build was not dissimilar to a ‘to do’ application. It is important to stress that this wasn’t an exercise in original thinking. The destination was far less important than the journey.

Want to find out how the application ended up? Point your phone browser at https://io.benfrain.com.

Read Part One of Designing And Building A Progressive Web Application Without A Framework.

Here is a summary of what we will cover in this article:

  • The project set-up and why I opted for Gulp as a build tool;
  • Application design patterns and what they mean in practice;
  • How to store and visualize application state;
  • How CSS was scoped to components;
  • What UI/UX niceties were employed to make things more ‘app-like’;
  • How the remit changed through iteration.

Let’s start with the build tools.

Build Tools

In order to get my basic tooling of TypeScript and PostCSS up and running and create a decent development experience, I would need a build system.

In my day job, for the last five years or so, I have been building interface prototypes in HTML/CSS and to a lesser extent, JavaScript. Until recently, I have used Gulp with any number of plugins almost exclusively to achieve my fairly humble build needs.

Typically I need to process CSS, convert JavaScript or TypeScript to more widely supported JavaScript, and occasionally, carry out related tasks like minifying code output and optimizing assets. Using Gulp has always allowed me to solve those issues with aplomb.

For those unfamiliar, Gulp lets you write JavaScript to do ‘something’ to files on your local file system.
To use Gulp, you typically have a single file (called gulpfile.js) in the root of your project. This JavaScript file allows you to define tasks as functions. You can add third-party ‘Plugins’, which are essentially further JavaScript functions, that deal with specific tasks.

An Example Gulp Task

An example Gulp task might be using a plugin to harness PostCSS to process your authoring style sheets into CSS when you change them (gulp-postcss). Or compiling TypeScript files to vanilla JavaScript (gulp-typescript) as you save them. Here is a simple example of how you write a task in Gulp. This task uses the ‘del’ module to delete all the files in a folder called ‘build’:

var gulp = require("gulp");
var del = require("del");

gulp.task("clean", function() {
  return del(["build/**/*"]);
});

The two require calls assign Gulp itself and the del module to variables. Then the gulp.task method is called. We name the task with a string as the first argument (“clean”) and then run a function, which in this case uses the del method to delete the folder passed to it as an argument. The asterisk symbols there are ‘glob’ patterns which essentially say ‘any file in any folder’ of the build folder.

Gulp tasks can get heaps more complicated but in essence, that is the mechanics of how things are handled. The truth is, with Gulp, you don’t need to be a JavaScript wizard to get by; grade 3 copy and paste skills are all you need.
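
To make that concrete with the plugins just mentioned, here is a rough sketch of a TypeScript compile task using gulp-typescript (the globs and task name are illustrative assumptions, not the actual project set-up):

var gulp = require("gulp");
var ts = require("gulp-typescript");

gulp.task("scripts", function() {
  // Compile every TypeScript partial and concatenate into one file
  var tsResult = gulp
    .src("ts/**/*.ts")
    .pipe(ts({ outFile: "inout.js" }));
  // Write the compiled JavaScript into the build folder
  return tsResult.js.pipe(gulp.dest("build/"));
});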

I’d stuck with Gulp as my default build tool/task runner for all these years with a policy of ‘if it ain’t broke; don’t try and fix it’.

However, I was worried I was getting stuck in my ways. It’s an easy trap to fall into. First, you start holidaying in the same place every year, then refusing to adopt any new fashion trends before eventually and steadfastly refusing to try out any new build tools.

I’d heard plenty of chatter on the Internets about ‘Webpack’ and thought it was my duty to try a project using the new-fangled toast of the front-end developer cool-kids.

Webpack

I distinctly remember skipping over to the webpack.js.org site with keen interest. The first explanation of what Webpack is and does started like this:

import bar from './bar';

Say what? In the words of Dr. Evil, “Throw me a frickin’ bone here, Scott”.

I know it’s my own hang-up to deal with but I’ve developed a revulsion to any coding explanations that mention ‘foo’, ‘bar’ or ‘baz’. That plus the complete lack of succinctly describing what Webpack was actually for had me suspecting it perhaps wasn’t for me.

Digging a little further into the Webpack documentation, a slightly less opaque explanation was offered, “At its core, webpack is a static module bundler for modern JavaScript applications”.

Hmmm. Static module bundler. Was that what I wanted? I wasn’t convinced. I read on but the more I read, the less clear I was. Back then, concepts like dependency graphs, hot module reloading, and entry points were essentially lost on me.

A couple of evenings of researching Webpack later, I abandoned any notion of using it.

I’m sure in the right situation and more experienced hands, Webpack is immensely powerful and appropriate but it seemed like complete overkill for my humble needs. Module bundling, tree-shaking, and hot-module reloading sounded great; I just wasn’t convinced I needed them for my little ‘app’.

So, back to Gulp then.

On the theme of not changing things for change’s sake, another piece of technology I wanted to evaluate was Yarn over NPM for managing project dependencies. Until that point, I had always used NPM, and Yarn was getting touted as a better, faster alternative. I don’t have much to say about Yarn other than if you are currently using NPM and everything is OK, you don’t need to bother trying Yarn.

One tool that arrived too late for me to appraise for this application is Parceljs. With zero configuration and BrowserSync-like browser reloading baked in, I’ve since found great utility in it! In addition, in Webpack’s defense, I’m told that v4 onwards of Webpack doesn’t require a configuration file. Anecdotally, in a more recent poll I ran on Twitter, of the 87 respondents, over half chose Webpack over Gulp, Parcel or Grunt.

I started my Gulp file with basic functionality to get up and running.

A ‘default’ task would watch the ‘source’ folders of style sheets and TypeScript files and compile them out to a build folder along with the basic HTML and associated source maps.

I got BrowserSync working with Gulp too. I might not know what to do with a Webpack configuration file but that didn’t mean I was some kind of animal. Having to manually refresh the browser while iterating with HTML/CSS is soooo 2010 and BrowserSync gives you that short feedback and iteration loop that is so useful for front-end coding.
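
For a flavor of how that hangs together, a default task wiring BrowserSync into the watchers can be sketched like this (the task names and globs are assumptions, not the shipped gulpfile):

var gulp = require("gulp");
var browserSync = require("browser-sync").create();

gulp.task("default", ["styles", "scripts"], function() {
  // Serve the compiled output from the build folder
  browserSync.init({ server: "./build" });
  // Recompile when source files change...
  gulp.watch("preCSS/**/*.css", ["styles"]);
  gulp.watch("ts/**/*.ts", ["scripts"]);
  // ...and reload the browser when the compiled output changes
  gulp.watch("build/**/*").on("change", browserSync.reload);
});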

Here is the basic gulp file as of 11.6.2017

You can see how I tweaked the Gulpfile nearer to the end of shipping, adding minification with uglify.

Project Structure

As a consequence of my technology choices, some elements of code organization for the application were defining themselves: a gulpfile.js in the root of the project, a node_modules folder (where Gulp stores plugin code), a preCSS folder for the authoring style sheets, a ts folder for the TypeScript files, and a build folder for the compiled code to live in.

The idea was to have an index.html that contained the ‘shell’ of the application, including any non-dynamic HTML structure and then links to the styles and the JavaScript file that would make the application work. On disk, it would look something like this:

build/
node_modules/
preCSS/
    img/
    partials/
    styles.css
ts/
.gitignore
gulpfile.js
index.html
package.json
tsconfig.json

Configuring BrowserSync to look at that build folder meant I could point my browser at localhost:3000 and all was good.

With a basic build system in place, file organization settled and some basic designs to make a start with, I had run out of procrastination fodder I could legitimately use to prevent me from actually building the thing!

Writing An Application

The principle of how the application would work was this. There would be a store of data. When the JavaScript loaded it would load that data, loop through each player in the data, creating the HTML needed to represent each player as a row in the layout and placing them in the appropriate in/out section. Then interactions from the user would move a player from one state to another. Simple.

When it came to actually writing the application, the two big conceptual challenges that needed to be understood were:

  1. How to represent the data for an application in a manner that could be easily extended and manipulated;
  2. How to make the UI react when data was changed from user input.

One of the simplest ways to represent a data structure in JavaScript is with object notation. That sentence reads a little computer science-y. More simply, an ‘object’ in JavaScript lingo is a handy way of storing data.

Consider this JavaScript object assigned to a variable called ioState (for In/Out State):

var ioState = {
    Count: 0, // Running total of how many players
    RosterCount: 0, // Total number of possible players
    ToolsExposed: false, // Whether the UI for the tools is showing
    Players: [], // A holder for the players
};

If you don’t really know JavaScript that well, you can probably at least grasp what’s going on: each line inside the curly braces is a property (or ‘key’ in JavaScript parlance) and value pair. You can set all sorts of things to a JavaScript key. For example, functions, arrays of other data or nested objects. Here’s an example:

var testObject = {
  testFunction: function() {
    return "sausages";
  },
  testArray: [3, 7, 9],
  nestedObject: {
    key1: "value1",
    key2: 2,
  }
};

The net result is that using that kind of data structure you can get, and set, any of the keys of the object. For example, if we want to set the count of the ioState object to 7:

ioState.Count = 7;

If we want to set a piece of text to that value, the notation works like this:

aTextNode.textContent = ioState.Count;

You can see that getting values and setting values to that state object is simple in the JavaScript side of things. However, reflecting those changes in the User Interface is less so. This is the main area where frameworks and libraries seek to abstract away the pain.

In general terms, when it comes to dealing with updating the user interface based upon state, it’s preferable to avoid querying the DOM, as this is generally considered a sub-optimal approach.

Consider the In/Out interface. It’s typically showing a list of potential players for a game. They are vertically listed, one under the other, down the page.

Perhaps each player is represented in the DOM with a label wrapping a checkbox input. This way, clicking a player would toggle the player to ‘In’ by virtue of the label making the input ‘checked’.

To update our interface, we might have a ‘listener’ on each input element in the JavaScript. On a click or change, the function queries the DOM and counts how many of our player inputs are checked. On the basis of that count, we would then update something else in the DOM to show the user how many players are checked.

Let’s consider the cost of that basic operation. We are listening on multiple DOM nodes for the click/check of an input, then querying the DOM to see how many of a particular DOM type are checked, then writing something into the DOM to show the user, UI wise, the number of players we just counted.

The alternative would be to hold the application state as a JavaScript object in memory. A button/input click in the DOM could merely update the JavaScript object and then, based on that change in the JavaScript object, do a single-pass update of the all interface changes that are needed. We could skip querying the DOM to count the players as the JavaScript object would already hold that information.
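
As a crude sketch of that difference (the element hook here is hypothetical), the state-driven version amounts to this:

var ioState = { Count: 0 };
var countDisplay = document.querySelector(".io-Count"); // assumed element

function renderCount() {
  // A single write to the DOM; no counting of checkboxes required
  countDisplay.textContent = ioState.Count;
}

function playerToggled(isIn) {
  // The in-memory object is the single source of truth
  ioState.Count = isIn ? ioState.Count + 1 : ioState.Count - 1;
  renderCount();
}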

So. Using a JavaScript object structure for the state seemed simple but flexible enough to encapsulate the application state at any given time. The theory of how this could be managed seemed sound enough too – this must be what phrases like ‘one-way data flow’ were all about? However, the first real trick would be in creating some code that would automatically update the UI based on any changes to that data.

The good news is that smarter people than I have already figured this stuff out (thank goodness!). People have been perfecting approaches to this kind of challenge since the dawn of applications. This category of problems is the bread and butter of ‘design patterns’. The moniker ‘design pattern’ sounded esoteric to me at first but after digging just a little it all started to sound less computer science and more common sense.

Design Patterns

A design pattern, in computer science lexicon, is a pre-defined and proven way of solving a common technical challenge. Think of design patterns as the coding equivalent of a cooking recipe.

Perhaps the most famous literature on design patterns is “Design Patterns: Elements of Reusable Object-Oriented Software” from back in 1994. Although that deals with C++ and Smalltalk, the concepts are transferable. For JavaScript, Addy Osmani’s “Learning JavaScript Design Patterns” covers similar ground. You can also read it online for free here.

Observer Pattern

Typically design patterns are split into three groups: Creational, Structural and Behavioural. I was looking for something Behavioural that helped to deal with communicating changes around the different parts of the application.

More recently, I have seen and read a really great deep-dive on implementing reactivity inside an app by Gregg Pollack. There is both a blog post and video for your enjoyment here.

When reading the opening description of the ‘Observer’ pattern in Learning JavaScript Design Patterns I was pretty sure it was the pattern for me. It is described thus:

The Observer is a design pattern where an object (known as a subject) maintains a list of objects depending on it (observers), automatically notifying them of any changes to state.

When a subject needs to notify observers about something interesting happening, it broadcasts a notification to the observers (which can include specific data related to the topic of the notification).

The key to my excitement was that this seemed to offer some way of things updating themselves when needed.

Suppose the user clicked a player named “Betty” to select that she was ‘In’ for the game. A few things might need to happen in the UI:

  1. Add 1 to the playing count
  2. Remove Betty from the ‘Out’ pool of players
  3. Add Betty to the ‘In’ pool of players

The app would also need to update the data that represented the UI. What I was very keen to avoid was this:

playerName.addEventListener("click", playerToggle);

function playerToggle(e) {
  if (inPlayers.includes(e.target.textContent)) {
    setPlayerOut(e.target.textContent);
    decrementPlayerCount();
  } else {
    setPlayerIn(e.target.textContent);
    incrementPlayerCount();
  }
}

The aim was to have an elegant data flow that updated what was needed in the DOM when and if the central data was changed.

With an Observer pattern, it was possible to send out updates to the state and therefore the user interface quite succinctly. Here is an example, the actual function used to add a new player to the list:

function itemAdd(itemString: string) {
  let currentDataSet = getCurrentDataSet();
  var newPerson = new makePerson(itemString);
  io.items[currentDataSet].EventData.splice(0, 0, newPerson);
  io.notify({
    items: io.items
  });
}

The part relevant to the Observer pattern there being the io.notify method. As that shows us modifying the items part of the application state, let me show you the observer that listened for changes to ‘items’:

io.addObserver({
  props: ["items"],
  callback: function renderItems() {
    // Code that updates anything to do with items...
  }
});

We have a notify method that broadcasts changes to the data, and observers of that data respond when properties they are interested in are updated.

With this approach, the app could have observables watching for changes in any property of the data and run a function whenever a change occurred.
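
To make the mechanics less abstract, here is a minimal sketch of how such a subject could look — a simplification for illustration, not the actual In/Out implementation:

var io = {
  items: [],
  observers: [],
  addObserver: function(observer) {
    // observer: { props: ["items"], callback: function() {...} }
    this.observers.push(observer);
  },
  notify: function(changes) {
    var changedProps = Object.keys(changes);
    // Apply the changes to the state first...
    changedProps.forEach(function(prop) {
      this[prop] = changes[prop];
    }, this);
    // ...then run the callback of any observer watching those properties
    this.observers.forEach(function(observer) {
      var interested = observer.props.some(function(prop) {
        return changedProps.indexOf(prop) !== -1;
      });
      if (interested) {
        observer.callback();
      }
    });
  }
};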

If you are interested in the Observer pattern I opted for, I describe it more fully here.

There was now an approach for updating the UI effectively based on state. Peachy. However, this still left me with two glaring issues.

One was how to store the state across page reloads/sessions; the other was that, despite the UI working, visually it just wasn’t very ‘app-like’. For example, if a button was pressed the UI instantly changed on screen. It just wasn’t particularly compelling.

Let’s deal with the storage side of things first.

Saving State

My primary interest in this undertaking, from a development perspective, centered on understanding how app interfaces could be built and made interactive with JavaScript. How to store and retrieve data from a server or tackle user-authentication and logins was ‘out of scope’.

Therefore, instead of hooking up to a web service for the data storage needs, I opted to keep all data on the client. There are a number of web platform methods of storing data on a client. I opted for localStorage.

The API for localStorage is incredibly simple. You set and get data like this:

// Set something
localStorage.setItem("yourKey", "yourValue");
// Get something
localStorage.getItem("yourKey");

LocalStorage has a setItem method that you pass two strings to. The first is the name of the key you want to store the data with and the second string is the actual string you want to store. The getItem method takes a string as an argument that returns to you whatever is stored under that key in localStorage. Nice and simple.

However, amongst the reasons to not use localStorage is the fact that everything has to be saved as a ‘string’. This means you can’t directly store something like an array or object. For example, try running these commands in your browser console:

// Set something
localStorage.setItem("myArray", [1, 2, 3, 4]);
// Get something
localStorage.getItem("myArray"); // Logs "1,2,3,4"

Even though we tried to set the value of ‘myArray’ as an array; when we retrieved it, it had been stored as a string (note the quote marks around ‘1,2,3,4’).

You can certainly store objects and arrays with localStorage but you need to be mindful that they need converting back and forth from strings.

So, in order to write state data into localStorage it was written to a string with the JSON.stringify() method like this:

const storage = window.localStorage;
storage.setItem("players", JSON.stringify(io.items));

When the data needed retrieving from localStorage, the string was turned back into usable data with the JSON.parse() method like this:

const players = JSON.parse(storage.getItem("players"));
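
One small defensive wrinkle worth noting: getItem returns null if nothing has been saved yet, and a mangled string will make JSON.parse throw. A cautious loader, sketched here with an assumed fallback, covers both cases:

function loadPlayers(fallbackData) {
  try {
    var stored = window.localStorage.getItem("players");
    // null means nothing saved yet; otherwise parse the stored string
    return stored ? JSON.parse(stored) : fallbackData;
  } catch (err) {
    // Corrupt or unparsable JSON: start afresh with the defaults
    return fallbackData;
  }
}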

Using localStorage meant everything was on the client and that meant no 3rd party services or data storage concerns.

Data was now persisting refreshes and sessions — Yay! The bad news was that localStorage does not survive a user emptying their browser data. When someone did that, all their In/Out data would be lost. That’s a serious shortcoming.

It’s not hard to appreciate that localStorage probably isn’t the best solution for ‘proper’ applications. Besides the aforementioned string issue, it is also slow for serious work as it blocks the ‘main thread’. Alternatives are coming, like KV Storage, but for now, make a mental note to caveat its use based on suitability.

Despite the fragility of saving data locally on a user’s device, hooking up to a service or database was resisted. Instead, the issue was side-stepped by offering a ‘load/save’ option. This would allow any user of In/Out to save their data as a JSON file which could be loaded back into the app if needed.

This worked well on Android but far less elegantly for iOS. On an iPhone, it resulted in a splurge of text on screen like this:

On an iPhone, it resulted in a splurge of text on screen

(Large preview)

As you can imagine, I was far from alone in berating Apple via WebKit about this shortcoming. The relevant bug was here.

At the time of writing, this bug has a solution and patch but it has yet to make its way into iOS Safari. Allegedly, iOS 13 fixes it, but that’s in beta as I write.

So, for my minimum viable product, that was storage addressed. Now it was time to attempt to make things more ‘app-like’!

App-I-Ness

It turns out, after many discussions with many people, that defining exactly what ‘app-like’ means is quite difficult.

Ultimately, I settled on ‘app-like’ being synonymous with a visual slickness usually missing from the web. When I think of the apps that feel good to use they all feature motion. Not gratuitous, but motion that adds to the story of your actions. It might be the page transitions between screens, the manner in which menus pop into existence. It’s hard to describe in words but most of us know it when we see it.

The first piece of visual flair needed was shifting player names up or down from ‘In’ to ‘Out’ and vice-versa when selected. Making a player instantly move from one section to the other was straightforward but certainly not ‘app-like’. An animation as a player name was clicked would hopefully emphasize the result of that interaction – the player moving from one category to another.

Like many of these kinds of visual interactions, their apparent simplicity belies the complexity involved in actually getting it working well.

It took a few iterations to get the movement right but the basic logic was this:

  • Once a ‘player’ is clicked, capture where that player is, geometrically, on the page;
  • Measure how far away the top of the destination area is if the player is going up (‘In’), and how far away the bottom is if going down (‘Out’);
  • If going up, a space equal to the height of the player row needs to be left as the player moves up and the players above should collapse downwards at the same rate as the time it takes for the player to travel up to land in the space vacated by the existing ‘In’ players (if any exist) coming down;
  • If a player is going ‘Out’ and moving down, everything else needs to move up to the space left and the player needs to end up below any current ‘Out’ players.

Phew! It was trickier than I thought in English — never mind JavaScript!
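
For the curious, the core measure-then-move idea is essentially the ‘FLIP’ animation technique. A simplified sketch (not the shipped code, and with clean-up of the inline styles omitted) looks like this:

function movePlayer(slat, targetSection) {
  var first = slat.getBoundingClientRect().top; // where the row starts
  targetSection.appendChild(slat); // move it in the DOM instantly
  var last = slat.getBoundingClientRect().top; // where it has landed
  var delta = first - last;

  // Visually snap the row back to where it came from...
  slat.style.transition = "none";
  slat.style.transform = "translateY(" + delta + "px)";
  slat.getBoundingClientRect(); // force a reflow so the snap isn't animated

  // ...then let a constant-duration transition carry it to its new home
  slat.style.transition = "transform 0.2s";
  slat.style.transform = "";
}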

There were additional complexities to consider and trial such as transition speeds. At the outset, it wasn’t obvious whether a constant speed of movement (e.g. 20px per 20ms), or a constant duration for the movement (e.g. 0.2s) would look better. The former was slightly more complicated as the speed needed to be computed ‘on the fly’ based upon how far the player needed to travel — greater distance requiring a longer transition duration.

However, it turned out that a constant transition duration was not just simpler in code; it actually produced a more favorable effect. The difference was subtle but these are the kind of choices you can only determine once you have seen both options.

Every so often whilst trying to nail this effect, a visual glitch would catch the eye but it was impossible to deconstruct in real time. I found the best debugging process was creating a QuickTime recording of the animation and then going through it a frame at a time. Invariably this revealed the problem quicker than any code based debugging.

Looking at the code now, I can appreciate that on something beyond my humble app, this functionality could almost certainly be written more effectively. Given that the app would know the number of players and know the fixed height of the slats, it should be entirely possible to make all distance calculations in the JavaScript alone, without any DOM reading.

It’s not that what was shipped doesn’t work, it’s just that it isn’t the kind of code solution you would showcase on the Internet. Oh, wait.

Other ‘app like’ interactions were much easier to pull off. Instead of menus simply snapping in and out with something as simple as toggling a display property, a lot of mileage was gained by simply exposing them with a little more finesse. It was still triggered simply but CSS was doing all the heavy lifting:

.io-EventLoader {
  position: absolute;
  top: 100%;
  margin-top: 5px;
  z-index: 100;
  width: 100%;
  opacity: 0;
  transition: all 0.2s;
  pointer-events: none;
  transform: translateY(-10px);
  [data-evswitcher-showing="true"] & {
    opacity: 1;
    pointer-events: auto;
    transform: none;
  }
}

There, when the data-evswitcher-showing="true" attribute was toggled on a parent element, the menu would fade in, transform back into its default position, and pointer events would be re-enabled so the menu could receive clicks.
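
The JavaScript side of that is as small as it sounds. Something along these lines would do it (the element hooks are hypothetical):

var switcher = document.querySelector(".io-EventSwitcher"); // assumed parent
var trigger = document.querySelector(".io-EventTrigger"); // assumed button

trigger.addEventListener("click", function() {
  var showing = switcher.getAttribute("data-evswitcher-showing") === "true";
  // Flip the attribute; the CSS above handles all of the animation
  switcher.setAttribute("data-evswitcher-showing", String(!showing));
});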

ECSS Style Sheet Methodology

You’ll notice in that prior code that from an authoring point of view, CSS overrides are being nested within a parent selector. That’s the way I always favor writing UI style sheets; a single source of truth for each selector and any overrides for that selector encapsulated within a single set of braces. It’s a pattern that requires the use of a CSS processor (Sass, PostCSS, LESS, Stylus, et al) but I feel is the only positive way to make use of nesting functionality.

I’d cemented this approach in my book, Enduring CSS and despite there being a plethora of more involved methods available to write CSS for interface elements, ECSS has served me and the large development teams I work with well since the approach was first documented way back in 2014! It proved just as effective in this instance.

Partialling The TypeScript

Even without a CSS processor or superset language like Sass, CSS has long had the ability to import one or more CSS files into another with the @import directive:

@import "other-file.css";

When beginning with JavaScript I was surprised there was no equivalent. Whenever code files get longer than a screen or so high, it always feels like splitting them into smaller pieces would be beneficial.

Another bonus to using TypeScript was that it has a beautifully simple way of splitting code into files and importing them when needed.

This capability pre-dated native JavaScript modules and was a great convenience feature. When TypeScript was compiled it stitched it all back into a single JavaScript file. It meant it was possible to easily break the application code up into manageable partial files for authoring and import them into the main file easily. The top of the main inout.ts looked like this:

/// <reference path="defaultData.ts" />
/// <reference path="splitTeams.ts" />
/// <reference path="deleteOrPaidClickMask.ts" />
/// <reference path="repositionSlat.ts" />
/// <reference path="createSlats.ts" />
/// <reference path="utils.ts" />
/// <reference path="countIn.ts" />
/// <reference path="loadFile.ts" />
/// <reference path="saveText.ts" />
/// <reference path="observerPattern.ts" />
/// <reference path="onBoard.ts" />

This simple house-keeping and organization task helped enormously.
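
The stitching into a single file is driven by the compiler’s outFile option. A minimal tsconfig.json along these lines would do it (the exact options used in the project are an assumption here):

{
  "compilerOptions": {
    "target": "es5",
    "outFile": "build/inout.js",
    "sourceMap": true
  }
}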

Multiple Events

At the outset, I felt that from a functionality point of view, a single event, like “Tuesday Night Football”, would suffice. In that scenario, if you loaded In/Out up you just added/removed or moved players in or out and that was that. There was no notion of multiple events.

I quickly decided that (even going for a minimum viable product) this would make for a pretty limited experience. What if somebody organized two games on different days, with a different roster of players? Surely In/Out could/should accommodate that need?

It didn’t take too long to re-shape the data to make this possible and amend the methods needed to load in a different set.

At the outset, the default data set looked something like this:

var defaultData = [
  { name: "Daz", paid: false, marked: false, team: "", in: false },
  { name: "Carl", paid: false, marked: false, team: "", in: false },
  { name: "Big Dave", paid: false, marked: false, team: "", in: false },
  { name: "Nick", paid: false, marked: false, team: "", in: false }
];

An array containing an object for each player.

After factoring in multiple events it was amended to look like this:

var defaultDataV2 = [
  {
    EventName: "Tuesday Night Footy",
    Selected: true,
    EventData: [
      { name: "Jack", marked: false, team: "", in: false },
      { name: "Carl", marked: false, team: "", in: false },
      { name: "Big Dave", marked: false, team: "", in: false },
      { name: "Nick", marked: false, team: "", in: false },
      { name: "Red Boots", marked: false, team: "", in: false },
      { name: "Gaz", marked: false, team: "", in: false },
      { name: "Angry Martin", marked: false, team: "", in: false }
    ]
  },
  {
    EventName: "Friday PM Bank Job",
    Selected: false,
    EventData: [
      { name: "Mr Pink", marked: false, team: "", in: false },
      { name: "Mr Blonde", marked: false, team: "", in: false },
      { name: "Mr White", marked: false, team: "", in: false },
      { name: "Mr Brown", marked: false, team: "", in: false }
    ]
  },
  {
    EventName: "WWII Ladies Baseball",
    Selected: false,
    EventData: [
      { name: "C Dottie Hinson", marked: false, team: "", in: false },
      { name: "P Kit Keller", marked: false, team: "", in: false },
      { name: "Mae Mordabito", marked: false, team: "", in: false }
    ]
  }
];

The new data was an array with an object for each event. Then in each event was an EventData property that was an array with player objects in as before.
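
That structure also hints at what the getCurrentDataSet call seen earlier might do. A plausible sketch (the real method may differ) simply finds the selected event:

function getCurrentDataSet() {
  // Return the index of whichever event is currently flagged as Selected
  return io.items.findIndex(function(event) {
    return event.Selected === true;
  });
}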

It took much longer to re-consider how the interface could best deal with this new capability.

From the outset, the design had always been very sterile. Considering this was also supposed to be an exercise in design, I didn’t feel I was being brave enough. So a little more visual flair was added, starting with the header. This is what I mocked up in Sketch:

A mockup of the revised app design

Revised design mockup. (Large preview)

It wasn’t going to win awards but it was certainly more arresting than where it started.

Aesthetics aside, it wasn’t until somebody else pointed it out that I appreciated the big plus icon in the header was very confusing. Most people thought it was a way to add another event. In reality, it switched to an ‘Add Player’ mode with a fancy transition that let you type in the name of the player in the same place the event name was currently.

This was another instance where fresh eyes were invaluable. It was also an important lesson in letting go. The honest truth was I had held on to the input mode transition in the header because I felt it was cool and clever. However, the fact was it was not serving the design and therefore the application as a whole.

This was changed in the live version. Instead, the header just deals with events — a more common scenario. Meanwhile, adding players is done from a sub-menu. This gives the app a much more understandable hierarchy.

The other lesson learned here was that whenever possible, it’s hugely beneficial to get candid feedback from peers. If they are good and honest people, they won’t let you give yourself a pass!

Summary: My Code Stinks

Right. So far, so normal tech-adventure retrospective piece; these things are ten a penny on Medium! The formula goes something like this: the dev details how they smashed down all obstacles to release a finely tuned piece of software into the Internets and then pick up an interview at Google or got acqui-hired somewhere. However, the truth of the matter is that I was a first-timer at this app-building malarkey so the code ultimately shipped as the ‘finished’ application stunk to high heaven!

For example, the Observer pattern implementation used worked very well. I was organized and methodical at the outset but that approach ‘went south’ as I became more desperate to finish things off. Like a serial dieter, old familiar habits crept back in and the code quality subsequently dropped.

Looking now at the code shipped, it is a less than ideal hodge-podge of clean observer pattern and bog-standard event listeners calling functions. In the main inout.ts file there are over 20 querySelector method calls; hardly a poster child for modern application development!

I was pretty sore about this at the time, especially as at the outset I was aware this was a trap I didn’t want to fall into. However, in the months that have since passed, I’ve become more philosophical about it.

The final post in this series reflects on finding the balance between silvery-towered code idealism and getting things shipped. It also covers the most important lessons learned during this process and my future aspirations for application development.

Smashing Editorial
(dm, yk, il, ra)

Source: Smashing Magazine, Designing And Building A Progressive Web Application Without A Framework (Part 2)

Created To Make You Think: Meet Our New Printed Magazine

dreamt up by webguru in Uncategorized | Comments Off on Created To Make You Think: Meet Our New Printed Magazine

Created To Make You Think: Meet Our New Printed Magazine

Created To Make You Think: Meet Our New Printed Magazine

Vitaly Friedman



For a long time, we wanted to create a printed magazine that hasn’t existed before. Not a magazine about fleeting design trends or ever-changing frameworks, but topics that would make us all think, and that would remain useful as time passes. A printed magazine exploring topics that often get lost in the myriad of blog posts out there.

About the Magazine

With Smashing Magazine Print, we want to provide a space for topics that perhaps have more longevity than what we usually cover online — without running short on practical and actionable insights, of course. One theme per issue, with handy, digestible 60–80 pages of high-quality print, containing a handful of curated and unpublished articles.

Approachable, friendly and inclusive. Written by and for the community, with a focus on real-world insights and the challenges that we all have to tackle together — for people just like you, who design and build the web. It took us more time than expected, but we are done now, and shipping worldwide. Download a free PDF preview (3 MB) and get your copy now.


The cover of Smashing Magazine Print.

Print: $17.50 (reduced from $24.95). Printed magazine + PDF, ePUB, Kindle. Free airmail shipping worldwide.

Print + Membership: $9/mo. Printed magazine for free + DRM-free eBooks, webinars and other fancy things.

All Smashing Members ($5 per month) get the eBook for free, and Smashers ($9 plan) also get a free printed copy shipped to the doorsteps. For additional copies, a discount of $10 applies.

About the Issue

We kick off with an issue exploring topics very close to our hearts — ethics, privacy and security, because these issues reach into all our lives, from our personal use of the web through to the advanced applications we develop. We take a closer look at the current state of tracking, advertising, GDPR and privacy law, data protection and addictive interfaces. We explore how we all can integrate privacy-driven decisions into our workflows by default, abandoning dark patterns for good along the way.


Connecting the dots, an illustration for the table of contents of the Smashing Magazine Print, with chapters connected with dots.
The table of contents of the shiny new printed magazine. Peek inside! (PDF).

The web is wonderfully diverse and unpredictable because of the wonderfully diverse people who shape it. While we often see people as lifeless dots in our analytics stats, every single dot is an actual person, and so every single dot matters. We deserve to be respected and valued, and that holds true for how companies treat our data and privacy, too. In times when humility and kindness have become rarities, we need to find a way to make the web a calmer, more respectful place that treats data and privacy seriously.

That’s what we wanted to explore. So, we got to work. We’ve teamed up with Veerle Pieters for the design and illustrations, and Rachel has been working with the authors to bring valuable insights to the magazine. Rather than asking the authors to write on very specific topics, Rachel asked them to contribute to a collection of thinking on the subjects of ethics, privacy and security.

Table of Contents

What follows is a collection of articles that sit very well together, yet tackle different aspects of the issues at hand. You might not agree with all of them, but we hope they will make you think.

  1. Editor’s Note
    by Rachel Andrew
  2. Towards Ethics By Default, One Step at a Time
    by Vitaly Friedman
  3. Designing For Addiction
    by Trine Falbe
  4. It’s Not About You
    by Heather Burns and Morten Rand-Hendriksen
  5. This One Weird Trick Tells Us Everything About You
    by Laura Kalbag
  6. Quieting Disquiet
    by Stuart Langridge
  7. Advertising Is Not The Problem
    by Cennydd Bowles

Along with the themed articles, we have included some little insights into our Smashing Family, pieces about our books, conferences and membership. You might not know it, but Smashing Magazine is brought to you by a tiny team of people who care. We care about the values we stand behind, and about the people who read the magazine, who join as members, and who come to conferences. We care a lot about the web. We hope that shines through in everything we do.

60 pages. High-quality print. Free worldwide airmail shipping from Germany, late July. Written by and for the community. Download a free PDF preview (3 MB).



Community Matters ❤️

The printed magazine wouldn’t exist without our dear Smashing community, and that’s why it is and always will be free to Smashing Members.

  • If you haven’t heard of the Membership yet, no worries: you are awesome just the way you are, and you can join our wonderful community any time. We can’t wait to see you!
  • If you are a Supporter ($3 plan), please don’t forget to sign in for your discount to be applied automatically.
  • If you are a Member ($5 plan), please sign in and download the digital version on the product page. Obviously, you also get a discount on the printed copy.
  • If you’re a Smasher ($9 plan), please sign in and save your shipping address, and we’ll ship the book to your doorsteps right away. Also, feel free to download the digital version on the product page.
The cover of Smashing Magazine Print
Our brand-new printed magazine is free for Smashing Members (eBook with the $5 plan, Print + eBook with $9 plan). You can cancel any time, you know.

Admittedly, publishing a printed magazine in a time when everything has gone digital is quite a gamble for us. But you never know for sure until you try — and fail or succeed.

We don’t know how the magazine will take shape in the future, but we already can’t wait to hear your feedback. We’re very excited, and we hope you’ll be, too — at least when you hold the very first issue of the printed magazine in your hands.

Stay smashing, and thank you for your ongoing support, everyone!

Smashing Editorial
(vf, al, il)

Source: Smashing Magazine, Created To Make You Think: Meet Our New Printed Magazine

Smashing TV Live: Towards Ethics & Privacy By Default

dreamt up by webguru in Uncategorized | Comments Off on Smashing TV Live: Towards Ethics & Privacy By Default

Smashing TV Live: Towards Ethics & Privacy By Default

Smashing TV Live: Towards Ethics & Privacy By Default

Vitaly Friedman



Honesty, humility and authenticity have become rarities on the web. Dark patterns prevail in many of the interfaces we use, and our data is collected, evaluated and handed over to third parties left and right, often without us noticing at all. To many of our customers it’s not really a big deal — privacy is often seen as a fair price for using all the wonderful free services around us.

However, we might underestimate the value of data we willingly hand over to data-driven companies. And very often, while we mindlessly click away a cookie consent prompt, we are giving away the sense of intimacy. After all, nobody loves to search for shoes one day, and be followed by the same shoes across the entire web over weeks to come.


Watch the session
Smashing TV: “Towards Privacy By Default”, a conversation, moderated by Vitaly Friedman. With Trine Falbe, Laura Kalbag, Heather Burns, Cennydd Bowles, Stuart Langridge and Morten Rand-Hendriksen.


In the panel discussion, we look into:

  • How do we deal with advertising, tracking, sensitive data collection and treatment of data?
  • How do we “sell” ethics and privacy to managers and clients?
  • How do we change the culture of dark patterns in our interfaces?
  • How and when do we integrate privacy, ethics, security in our design/dev workflows?
  • How do we maintain privacy and ethics over time?
  • How does GDPR help us shift the culture on the web?

These aren’t easy questions to answer, and we need reliable solutions to make the difference. But we can figure out a way forward, and how to make privacy-aware decisions a default in our design and development work. These are also the questions we explore in our brand-new Smashing Magazine Print. In fact, it’s the magazine that prompted us to run a live session in the first place.

Meet Our New Printed Magazine

With Smashing Magazine Print, we want to provide a space for important topics that perhaps have more longevity than what we usually cover online — without running short on practical and actionable insights, of course.

We kick off with a 60-pages issue exploring ethics, privacy and security. We take a closer look at the current state of tracking, advertising, GDPR and privacy law, data protection and addictive interfaces. We explore how we all can integrate privacy-driven decisions into our workflows by default, abandoning dark patterns for good along the way.

And — guess what! — it’s ready now, and it’s shipping worldwide. Download a free PDF preview (3 MB) and get your copy now.


The cover of Smashing Magazine Print.

Print: $17.50 (reduced from $24.95). Printed magazine + PDF, ePUB, Kindle. Free airmail shipping worldwide.

Print + Membership: $9/mo. Printed magazine for free + eBooks, webinars, discounts and other fancy things.

Rewatch The Recording!

And that’s a wrap! Watch the recording with Trine Falbe, Laura Kalbag, Heather Burns, Cennydd Bowles, Stuart Langridge and Morten Rand-Hendriksen.

Watch the recording ↬

Smashing Editorial
(dm, og, il)

Source: Smashing Magazine, Smashing TV Live: Towards Ethics & Privacy By Default

Designing And Building A Progressive Web Application Without A Framework (Part 1)

dreamt up by webguru in Uncategorized | Comments Off on Designing And Building A Progressive Web Application Without A Framework (Part 1)

Designing And Building A Progressive Web Application Without A Framework (Part 1)

Designing And Building A Progressive Web Application Without A Framework (Part 1)

Ben Frain



How does a web application actually work? I don’t mean from the end-user point of view. I mean in the technical sense. How does a web application actually run? What kicks things off? Without any boilerplate code, what’s the right way to structure an application? Particularly a client-side application where all the logic runs on the end-users device. How does data get managed and manipulated? How do you make the interface react to changes in the data?

These are the kind of questions that are simple to side-step or ignore entirely with a framework. Developers reach for something like React, Vue, Ember or Angular, follow the documentation to get up and running and away they go. Those problems are handled by the framework’s box of tricks.

That may be exactly how you want things. Arguably, it’s the smart thing to do if you want to build something to a professional standard. However, with the magic abstracted away, you never get to learn how the tricks are actually performed.

Don’t you want to know how the tricks are done?

I did. So, I decided to try building a basic client-side application, sans-framework, to understand these problems for myself.

But, I’m getting a little ahead of myself; a little background first.

Before starting this journey I considered myself highly proficient at HTML and CSS but not JavaScript. As I felt I’d solved the biggest questions I had of CSS to my satisfaction, the next challenge I set myself was understanding a programming language.

The fact was, I was relatively beginner-level with JavaScript. And, aside from hacking the PHP of WordPress around, I had no exposure or training in any other programming language either.

Let me qualify that ‘beginner-level’ assertion. Sure, I could get interactivity working on a page. Toggle classes, create DOM nodes, append and move them around, etc. But when it came to organizing the code for anything beyond that I was pretty clueless. I wasn’t confident building anything approaching an application. I had no idea how to define a set of data in JavaScript, let alone manipulate it with functions.

I had no understanding of JavaScript ‘design patterns’ — established approaches for solving oft-encountered code problems. I certainly didn’t have a feel for how to approach fundamental application-design decisions.

Have you ever played ‘Top Trumps’? Well, in the web developer edition, my card would look something like this (marks out of 100):

  • CSS: 95
  • Copy and paste: 90
  • Hairline: 4
  • HTML: 90
  • JavaScript: 13

In addition to wanting to challenge myself on a technical level, I was also lacking in design chops.

With almost exclusively coding other people’s designs for the past decade, my visual design skills hadn’t had any real challenges since the late noughties. Reflecting on that fact and on my puny JavaScript skills cultivated a growing sense of professional inadequacy. It was time to address my shortcomings.

A personal challenge took form in my mind: to design and build a client-side JavaScript web application.

On Learning

There have never been more great resources for learning programming languages. Particularly JavaScript. However, it took me a while to find resources that explained things in a way that clicked. For me, Kyle Simpson’s ‘You Don’t Know JS’ and ‘Eloquent JavaScript’ by Marijn Haverbeke were a big help.

If you are beginning learning JavaScript, you will surely need to find your own gurus; people whose method of explaining works for you.

The first key thing I learned was that it’s pointless trying to learn from a teacher/resource that doesn’t explain things in a way you understand. Some people look at function examples with foo and bar in them and instantly grok the meaning. I’m not one of those people. If you aren’t either, don’t assume programming languages aren’t for you. Just try a different resource and keep trying to apply the skills you are learning.

It’s also not a given that you will enjoy any kind of eureka moment where everything suddenly ‘clicks’; like the coding equivalent of love at first sight. It’s more likely it will take a lot of perseverance and considerable application of your learnings to feel confident.

As soon as you feel even a little competent, trying to apply your learning will teach you even more.

Here are some resources I found helpful along the way:

Right, that’s pretty much all you need to know about why I arrived at this point. The elephant now in the room is, why not use a framework?

Why Not React, Ember, Angular, Vue Et Al

Whilst the answer was alluded to at the beginning, I think the subject of why a framework wasn’t used needs expanding upon.

There is an abundance of high-quality, well-supported JavaScript frameworks, each specifically designed for the building of client-side web applications. Exactly the sort of thing I was looking to build. I forgive you for wondering the obvious: like, err, why not use one?

Here’s my stance on that. When you learn to use an abstraction, that’s primarily what you are learning — the abstraction. I wanted to learn the thing, not the abstraction of the thing.

I remember learning some jQuery back in the day. Whilst the lovely API let me make DOM manipulations easier than ever before I became powerless without it. I couldn’t even toggle classes on an element without needing jQuery. Task me with some basic interactivity on a page without jQuery to lean on and I stumbled about in my editor like a shorn Samson.

More recently, as I attempted to improve my understanding of JavaScript, I’d tried to wrap my head around Vue and React a little. But ultimately, I was never sure where standard JavaScript ended and React or Vue began. My opinion is that these abstractions are far more worthwhile when you understand what they are doing for you.

Therefore, if I was going to learn something I wanted to understand the core parts of the language. That way, I had some transferable skills. I wanted to retain something when the current flavor of the month framework had been cast aside for the next ‘hot new thing’.

Okay. Now, we’re caught up on why this app was getting made, and also, like it or not, how it would be made.

Let’s move on to what this thing was going to be.

An Application Idea

I needed an app idea. Nothing too ambitious; I didn’t have any delusions of creating a business start-up or appearing on Dragon’s Den — learning JavaScript and application basics was my primary goal.

The application needed to be something I had a fighting chance of pulling off technically and making a half-decent job of designing to boot.

Tangent time.

Away from work, I organize and play indoor football whenever I can. As the organizer, it’s a pain to mentally note who has sent me a message to say they are playing and who hasn’t. 10 people are needed for a game typically, 8 at a push. There’s a roster of about 20 people who may or may not be able to play each game.

The app idea I settled on was something that enabled picking players from a roster, giving me a count of how many players had confirmed they could play.

As I thought about it more I felt I could broaden the scope a little more so that it could be used to organize any simple team-based activity.

Admittedly, I’d hardly dreamt up Google Earth. It did, however, have all the essential challenges: design, data management, interactivity, data storage, code organization.

Design-wise, I wouldn’t concern myself with anything other than a version that could run and work well on a phone viewport. I’d limit the design challenges to solving the problems on small screens only.

The core idea certainly lent itself to ‘to-do’ style applications, of which there were heaps of existing examples to look at for inspiration whilst also having just enough difference to provide some unique design and coding challenges.

Intended Features

An initial bullet-point list of features I intended to design and code looked like this:

  • An input box to add people to the roster;
  • The ability to set each person to ‘in’ or ‘out’;
  • A tool that splits the people into teams, defaulting to 2 teams;
  • The ability to delete a person from the roster;
  • Some interface for ‘tools’. Besides splitting, available tools should include the ability to download the entered data as a file, upload previously saved data and delete-all players in one go;
  • The app should show a current count of how many people are ‘In’;
  • If there are no people selected for a game, it should hide the team splitter;
  • Pay mode. A toggle in settings that allows ‘in’ users to have an additional toggle to show whether they have paid or not.

At the outset, this is what I considered the features for a minimum viable product.

Design

Designs started on scraps of paper. It was illuminating (read: crushing) to find out just how many ideas which were incredible in my head turned out to be ludicrous when subjected to even the meagre scrutiny afforded by a pencil drawing.

Many ideas were therefore quickly ruled out, but the flip side was that by sketching some ideas out, it invariably led to other ideas I would never have otherwise considered.

Now, designers reading this will likely be like, “Duh, of course” but this was a real revelation to me. Developers are used to seeing later stage designs, rarely seeing all the abandoned steps along the way prior to that point.

Once happy with something as a pencil drawing, I’d try and re-create it in the design package, Sketch. Just as ideas fell away at the paper and pencil stage, an equal number failed to make it through the next fidelity stage of Sketch. The ones that seemed to hold up as artboards in Sketch were then chosen as the candidates to code out.

I’d find in turn that when those candidates were built-in code, a percentage also failed to work for varying reasons. Each fidelity step exposed new challenges for the design to either pass or fail. And a failure would lead me literally and figuratively back to the drawing board.

As such, ultimately, the design I ended up with is quite a bit different than the one I originally had in Sketch. Here are the first Sketch mockups:

Initial design of Who’s In application

Initial design of Who’s In application (Large preview)

Initial menu for Who’s In application

Initial menu for Who’s In application (Large preview)

Even then, I was under no delusions; it was a basic design. However, at this point I had something I was relatively confident could work and I was chomping at the bit to try and build it.

Technical Requirements

With some initial feature requirements and a basic visual direction, it was time to consider what should be achieved with the code.

Although received wisdom dictates that the way to make applications for iOS or Android devices is with native code, we have already established that my intention was to build the application with JavaScript.

I was also keen to ensure that the application ticked all the boxes necessary to qualify as a Progressive Web Application, or PWA as they are more commonly known.

On the off chance you are unaware what a Progressive Web Application is, here is the ‘elevator pitch’. Conceptually, imagine a standard web application that meets a particular set of criteria. Meeting those requirements means a supporting device (think mobile phone) grants the web app special privileges, making the web application greater than the sum of its parts.

On Android, in particular, it can be near impossible to distinguish a PWA, built with just HTML, CSS and JavaScript, from an application built with native code.

Here is the Google checklist of requirements for an application to be considered a Progressive Web Application:

  • Site is served over HTTPS;
  • Pages are responsive on tablets & mobile devices;
  • All app URLs load while offline;
  • Metadata provided for Add to Home screen;
  • First load fast even on 3G;
  • Site works cross-browser;
  • Page transitions don’t feel like they block on the network;
  • Each page has a URL.
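
To give a flavour of what meeting those items involves, here is a minimal sketch of registering a service worker, the piece that makes offline loading possible. The file name ‘sw.js’ is an assumption for illustration; the ‘Add to Home screen’ metadata lives in a separate manifest file linked from the page’s head:

// Register a service worker so the app can respond while offline.
// '/sw.js' is a placeholder path for this sketch.
if ('serviceWorker' in navigator) {
  navigator.serviceWorker
    .register('/sw.js')
    .then(() => console.log('Service worker registered'))
    .catch(err => console.error('Service worker registration failed:', err));
}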

Now, in addition, if you really want to be the teacher’s pet and have your application considered an ‘Exemplary Progressive Web App’, then it should also meet the following requirements:

  • Site’s content is indexed by Google;
  • Schema.org metadata is provided where appropriate;
  • Social metadata is provided where appropriate;
  • Canonical URLs are provided when necessary;
  • Pages use the History API;
  • Content doesn’t jump as the page loads;
  • Pressing back from a detail page retains scroll position on the previous list page;
  • When tapped, inputs aren’t obscured by the on-screen keyboard;
  • Content is easily shareable from standalone or full-screen mode;
  • Site is responsive across phone, tablet and desktop screen sizes;
  • Any app install prompts are not used excessively;
  • The Add to Home Screen prompt is intercepted;
  • First load very fast even on 3G;
  • Site uses cache-first networking;
  • Site appropriately informs the user when they’re offline;
  • Provide context to the user about how notifications will be used;
  • UI encouraging users to turn on Push Notifications must not be overly aggressive;
  • Site dims the screen when the permission request is showing;
  • Push notifications must be timely, precise and relevant;
  • Provides controls to enable and disable notifications;
  • User is logged in across devices via Credential Management API;
  • User can pay easily via native UI from Payment Request API.

Crikey! I don’t know about you but that second bunch of stuff seems like a whole lot of work for a basic application! As it happens there are plenty of items there that aren’t relevant to what I had planned anyway. Despite that, I’m not ashamed to say I lowered my sights to only pass the initial tests.
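
That said, one item from the second list is worth a quick illustration because it underpins the offline items in the first list too: ‘cache-first networking’. Inside a service worker, a minimal cache-first fetch handler might look something like this (the approach is standard; I’m not claiming it’s the exact code the finished app uses):

// Respond from the cache first, falling back to the network on a miss.
self.addEventListener('fetch', event => {
  event.respondWith(
    caches.match(event.request).then(cached => cached || fetch(event.request))
  );
});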

For a whole class of application types, I believe a PWA is a more applicable solution than a native application. Where games and SaaS arguably make more sense in an app store, smaller utilities can live quite happily and more successfully on the web as Progressive Web Applications.

Whilst on the subject of me shirking hard work, another choice made early on was to try and store all data for the application on the user’s own device. That way, it wouldn’t be necessary to hook up with data services and servers and deal with log-ins and authentications. Given where my skills were at, figuring out authentication and storing user data would almost certainly have been biting off more than I could chew, and overkill for the remit of the application!
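
Storing everything on the device is simpler than it perhaps sounds. The Web Storage API, for example, makes persisting the roster a handful of lines. The key name below is a placeholder of my own, and this is one plausible approach rather than necessarily the one the finished application takes:

// Persist and restore application state as JSON in localStorage.
const STORAGE_KEY = 'whos-in-state'; // placeholder key name

function saveState(state) {
  localStorage.setItem(STORAGE_KEY, JSON.stringify(state));
}

function loadState() {
  const raw = localStorage.getItem(STORAGE_KEY);
  return raw ? JSON.parse(raw) : { players: [], payModeEnabled: false };
}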

Technology Choices

With a fairly clear idea on what the goal was, attention turned to the tools that could be employed to build it.

I decided early on to use TypeScript, which is described on its website as “… a typed superset of JavaScript that compiles to plain JavaScript.” What I’d seen and read of the language I liked, especially the fact it lent itself so well to static analysis.

Static analysis simply means a program can look at your code before running it (i.e. while it is static) and highlight problems. It can’t necessarily point out logical issues, but it can flag code that doesn’t conform to a set of rules.

Anything that could point out my (sure to be many) errors as I went along had to be a good thing, right?

If you are unfamiliar with TypeScript consider the following code in vanilla JavaScript:

console.log(`${count} players`);    // count is referenced before it is declared
let count = 0;

Run this code and you will get an error something like:

ReferenceError: Cannot access uninitialized variable.

Anyone with even a little JavaScript prowess doesn’t need a tool to tell them this basic example won’t end well.

However, if you write that same code in TypeScript, this happens in the editor:

TypeScript in action (Large preview)

I’m getting some feedback on my idiocy before I even run the code! That’s the beauty of static analysis. This feedback was often like having a more experienced developer sat with me catching errors as I went.

TypeScript primarily, as the name implies, lets you specify the ‘type’ expected for each thing in the code. This prevents you from inadvertently ‘coercing’ one type to another, or from attempting to run a method on a piece of data it doesn’t apply to (an array method on an object, for example). This isn’t the sort of thing that necessarily results in an error when the code runs, but it can certainly introduce hard-to-track bugs. Thanks to TypeScript, you get feedback in the editor before even attempting to run the code.
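
For example, here is the kind of mistake that gets flagged in the editor before the code ever runs. It’s a contrived sketch of my own, not code from the application:

const scores = { monday: 3, thursday: 5 };

// Attempting an array method on an object:
scores.map(game => game * 2);
// Editor feedback: Property 'map' does not exist on type
// '{ monday: number; thursday: number; }'.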

TypeScript was certainly not essential in this journey of discovery, and I would never encourage anyone to jump on tools of this nature unless there was a clear benefit. Setting up and configuring tools can be a time sink, so definitely consider their applicability before diving in.

There are other benefits afforded by TypeScript we will come to in the next article in this series but the static analysis capabilities were enough alone for me to want to adopt TypeScript.

There were knock-on considerations of the choices I was making. Opting to build the application as a Progressive Web Application meant I would need to understand Service Workers to some degree. Using TypeScript would mean introducing build tools of some sort. How would I manage those tools? Historically, I’d used npm as a package manager, but was it worth using Yarn instead? Being performance-focused would mean considering minification and bundling tools; tools like webpack were becoming more and more popular and would need evaluating.

Summary

I’d recognized a need to embark on this quest. My JavaScript powers were weak and nothing girds the loins as much as attempting to put theory into practice. Deciding to build a web application with vanilla JavaScript was to be my baptism of fire.

I’d spent some time researching and considering the options for making the application and decided that making the application a Progressive Web App made the most sense for my skill-set and the relative simplicity of the idea.

I’d need build tools, a package manager, and subsequently, a whole lot of patience.

Ultimately, at this point the fundamental question remained: was this something I could actually manage? Or would I be humbled by my own ineptitude?

I hope you join me in part two when you can read about build tools, JavaScript design patterns and how to make something more ‘app-like’.

Smashing Editorial
(dm, yk, il)

Source: Smashing Magazine, Designing And Building A Progressive Web Application Without A Framework (Part 1)

Collective #534

dreamt up by webguru in Uncategorized | Comments Off on Collective #534




C534_flawwwless

Flawwwless

Flawwwless is a coding education platform with easy-to-follow interactive tutorials.

Check it out



C534_docs

ImportDoc

With ImportDoc you can create a web page that updates dynamically with the content of a Google document.

Check it out



C534_gatsbythemes

Gatsby Themes

Exciting news for Gatsby users: With a Gatsby theme, all of your default configuration (shared functionality, data sourcing, design) is abstracted out of your site, and into an installable package.

Check it out


C534_fwa

The Cool Club FWA

The web presentation of The Cool Club FWA, a deck of cards displaying the 54 coolest websites in history, as featured in “Web Design, The evolution of the Digital World 1990-today”.

Check it out


C534_spreadsheets

Stein

With Stein you can use Google Sheets as your no-setup data store.

Check it out


Collective #534 was written by Pedro Botelho and published on Codrops.


Source: Codrops, Collective #534

The Essential Guide To JavaScript’s Newest Data Type: BigInt

dreamt up by webguru in Uncategorized | Comments Off on The Essential Guide To JavaScript’s Newest Data Type: BigInt

The Essential Guide To JavaScript’s Newest Data Type: BigInt

The Essential Guide To JavaScript’s Newest Data Type: BigInt

Faraz Kelhini



The BigInt data type aims to enable JavaScript programmers to represent integer values larger than the range supported by the Number data type. The ability to represent integers with arbitrary precision is particularly important when performing mathematical operations on large integers. With BigInt, integer overflow will no longer be an issue.

Additionally, you can safely work with high-resolution timestamps, large integer IDs, and more without having to use a workaround. BigInt is currently a stage 3 proposal. Once added to the specification, it will become the second numeric data type in JavaScript, which will bring the total number of supported data types to eight:

  • Boolean
  • Null
  • Undefined
  • Number
  • BigInt
  • String
  • Symbol
  • Object

In this article, we will take a good look at BigInt and see how it can help overcome the limitations of the Number type in JavaScript.

The Problem

The lack of an explicit integer type in JavaScript is often baffling to programmers coming from other languages. Many programming languages support multiple numeric types such as float, double, integer, and bignum, but that’s not the case with JavaScript. In JavaScript, all numbers are represented in double-precision 64-bit floating-point format as defined by the IEEE 754-2008 standard.

Under this standard, very large integers that cannot be exactly represented are automatically rounded. To be precise, the Number type in JavaScript can only safely represent integers between -9007199254740991 (-(2^53 - 1)) and 9007199254740991 (2^53 - 1). Any integer value outside this range may lose precision.

This can be easily examined by executing the following code:

console.log(9999999999999999);    // → 10000000000000000

This integer is larger than the largest number JavaScript can reliably represent with the Number primitive. Therefore, it’s rounded. Unexpected rounding can compromise a program’s reliability and security. Here’s another example:

// notice the last digits
9007199254740992 === 9007199254740993;    // → true

JavaScript provides the Number.MAX_SAFE_INTEGER constant that allows you to quickly obtain the maximum safe integer in JavaScript. Similarly, you can obtain the minimum safe integer by using the Number.MIN_SAFE_INTEGER constant:

const minInt = Number.MIN_SAFE_INTEGER;

console.log(minInt);         // → -9007199254740991

console.log(minInt - 5);     // → -9007199254740996

// notice how this outputs the same value as above
console.log(minInt - 4);     // → -9007199254740996

The Solution

As a workaround to these limitations, some JavaScript developers represent large integers using the String type. The Twitter API, for example, adds a string version of IDs to objects when responding with JSON. Additionally, a number of libraries such as bignumber.js have been developed to make working with large integers easier.
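
The string workaround is easy to demonstrate. When a large integer ID arrives as a plain JSON number, it is parsed into a Number and silently rounded, whereas the string version survives intact. The payload below is my own illustration, not a real API response:

const payload = '{"id": 9007199254740993, "id_str": "9007199254740993"}';
const tweet = JSON.parse(payload);

console.log(tweet.id);        // → 9007199254740992 (rounded; precision lost)
console.log(tweet.id_str);    // → "9007199254740993" (exact, kept as a string)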

With BigInt, applications no longer need a workaround or library to safely represent integers beyond Number.MAX_SAFE_INTEGER and Number.MIN_SAFE_INTEGER. Arithmetic operations on large integers can now be performed in standard JavaScript without risking loss of precision. The added benefit of using a native data type over a third-party library is better run-time performance.

To create a BigInt, simply append n to the end of an integer. Compare:

console.log(9007199254740995n);    // → 9007199254740995n
console.log(9007199254740995);     // → 9007199254740996

Alternatively, you can call the BigInt() constructor:

BigInt("9007199254740995");    // → 9007199254740995n

BigInt literals can also be written in binary, octal or hexadecimal notation:


// binary
console.log(0b100000000000000000000000000000000000000000000000000011n);
// → 9007199254740995n

// hex
console.log(0x20000000000003n);
// → 9007199254740995n

// octal
console.log(0o400000000000000003n);
// → 9007199254740995n

// note that legacy octal syntax is not supported
console.log(0400000000000000003n);
// → SyntaxError

Keep in mind that you can’t use the strict equality operator to compare a BigInt to a regular number because they are not of the same type:

console.log(10n === 10);    // → false

console.log(typeof 10n);    // → bigint
console.log(typeof 10);     // → number

Instead, you can use the equality operator, which performs implicit type conversion before comparing its operands:

console.log(10n == 10);    // → true

All arithmetic operators can be used on BigInts except for the unary plus (+) operator:

10n + 20n;    // → 30n
10n - 20n;    // → -10n
+10n;         // → TypeError: Cannot convert a BigInt value to a number
-10n;         // → -10n
10n * 20n;    // → 200n
20n / 10n;    // → 2n
23n % 10n;    // → 3n
10n ** 3n;    // → 1000n

let x = 10n;
++x;          // → 11n
--x;          // → 10n

The reason that the unary plus (+) operator is not supported is that some programs may rely on the invariant that + always produces a Number, or throws an exception. Changing the behavior of + would also break asm.js code.

Naturally, when used with BigInt operands, arithmetic operators are expected to return a BigInt value. Therefore, the result of the division (/) operator is automatically truncated toward zero; any fractional part is discarded. For example:

25 / 10;      // → 2.5
25n / 10n;    // → 2n

Implicit Type Conversion

Because implicit type conversion could lose information, mixed operations between BigInts and Numbers are not allowed. When mixing large integers and floating-point numbers, the resulting value may not be accurately representable by BigInt or Number. Consider the following example:

(9007199254740992n + 1n) + 0.5

The result of this expression is outside of the domain of both BigInt and Number. A Number with a fractional part cannot be accurately converted to a BigInt. And a BigInt larger than 2^53 cannot be accurately converted to a Number.

As a result of this restriction, it’s not possible to perform arithmetic operations with a mix of Number and BigInt operands. You also cannot pass a BigInt to Web APIs and built-in JavaScript functions that expect a Number. Attempting to do so will cause a TypeError:

10 + 10n;    // → TypeError
Math.max(2n, 4n, 6n);    // → TypeError

Note that relational operators do not follow this rule, as shown in this example:

10n > 5;    // → true

If you want to perform arithmetic computations with BigInt and Number, you first need to determine the domain in which the operation should be done. To do that, simply convert either of the operands by calling Number() or BigInt():

BigInt(10) + 10n;    // → 20n
// or
10 + Number(10n);    // → 20

When encountered in a Boolean context, a BigInt is treated much like a Number. In other words, a BigInt is considered a truthy value as long as it’s not 0n:

if (5n) {
    // this code block will be executed
}

if (0n) {
    // but this code block won't
}

No implicit type conversion occurs when sorting an array of BigInts and Numbers:

const arr = [3n, 4, 2, 1n, 0, -1n];

arr.sort();    // → [-1n, 0, 1n, 2, 3n, 4]
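
If you need a guaranteed numeric ordering rather than relying on the default string-based comparison, note that a comparator can’t use subtraction (a - b throws for mixed operands, as shown earlier), but relational comparisons across the two types work fine:

// A comparator that avoids mixed arithmetic entirely.
arr.sort((a, b) => (a < b ? -1 : a > b ? 1 : 0));
// → [-1n, 0, 1n, 2, 3n, 4]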

Bitwise operators such as |, &, <<, >>, and ^ operate on BigInts in a similar way to Numbers. Negative numbers are interpreted as infinite-length two’s complement. Mixed operands are not allowed. Here are some examples:

90 | 115;      // → 123
90n | 115n;    // → 123n
90n | 115;     // → TypeError

The BigInt Constructor

As with other primitive types, a BigInt can be created using a constructor function. The argument passed to BigInt() is automatically converted to a BigInt, if possible:

BigInt("10");    // → 10n
BigInt(10);      // → 10n
BigInt(true);    // → 1n

Data types and values that cannot be converted throw an exception:

BigInt(10.2);     // → RangeError
BigInt(null);     // → TypeError
BigInt("abc");    // → SyntaxError

You can directly perform arithmetic operations on a BigInt created using a constructor:

BigInt(10) * 10n;    // → 100n

When used as operands of the strict equality operator, BigInts created using the constructor are treated just like literal ones:

BigInt(true) === 1n;    // → true

Library Functions

JavaScript provides two library functions for representing BigInt values as signed or unsigned integers:

  • BigInt.asUintN(width, BigInt): wraps a BigInt between 0 and 2^width - 1;
  • BigInt.asIntN(width, BigInt): wraps a BigInt between -2^(width-1) and 2^(width-1) - 1.

These functions are particularly useful when performing 64-bit arithmetic operations. This way you can stay within the intended range.
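
For instance, here is how values wrap at the 64-bit boundary (an illustration of my own):

const maxUint64 = 2n ** 64n - 1n;

BigInt.asUintN(64, maxUint64);         // → 18446744073709551615n (fits)
BigInt.asUintN(64, maxUint64 + 1n);    // → 0n (wraps around to zero)
BigInt.asIntN(64, maxUint64);          // → -1n (same bits, read as signed)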

Browser Support And Transpiling

At the time of this writing, Chrome 67+ and Opera 54+ fully support the BigInt data type. Unfortunately, Edge and Safari haven’t implemented it yet. Firefox doesn’t support BigInt by default, but it can be enabled by setting javascript.options.bigint to true in about:config. An up-to-date list of supported browsers is available on Can I use….
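
Until support is universal, a runtime feature check is a sensible defensive pattern. This is a general technique rather than anything the proposal mandates:

// Detect BigInt support before relying on it.
const hasBigInt = typeof BigInt === 'function';

const bigValue = hasBigInt
  ? BigInt('9007199254740995')
  : null; // fall back to a string or a library such as JSBI

Note that the value is created via the BigInt() constructor rather than an n literal, because a literal would be a syntax error in browsers with no BigInt support at all.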

Unfortunately, transpiling BigInt is an extremely complicated process which incurs a hefty run-time performance penalty. It’s also impossible to directly polyfill BigInt because the proposal changes the behavior of several existing operators. For now, a better alternative is to use the JSBI library, a pure-JavaScript implementation of the BigInt proposal.

This library provides an API that behaves exactly the same as the native BigInt. Here’s how you can use JSBI:

import JSBI from './jsbi.mjs';

const b1 = JSBI.BigInt(Number.MAX_SAFE_INTEGER);
const b2 = JSBI.BigInt('10');

const result = JSBI.add(b1, b2);

console.log(String(result));    // → '9007199254741001'

An advantage of using JSBI is that once browser support improves, you won’t need to rewrite your code. Instead, you can automatically compile your JSBI code into native BigInt code by using a babel plugin. Furthermore, the performance of JSBI is on par with native BigInt implementations. You can expect wider browser support for BigInt soon.

Conclusion

BigInt is a new data type intended for use when integer values are larger than the range supported by the Number data type. This data type allows us to safely perform arithmetic operations on large integers, represent high-resolution timestamps, use large integer IDs, and more without the need to use a library.

It’s important to keep in mind that you cannot perform arithmetic operations with a mix of Number and BigInt operands. You’ll need to determine the domain in which the operation should be done by explicitly converting either of the operands. Moreover, for compatibility reasons, you are not allowed to use the unary plus (+) operator on a BigInt.

What do you think? Do you find BigInt useful? Let us know in the comments!

Smashing Editorial
(dm, yk, il)

Source: Smashing Magazine, The Essential Guide To JavaScript’s Newest Data Type: BigInt