
Designing And Building A Progressive Web Application Without A Framework (Part 1)

dreamt up by webguru in Uncategorized | Comments Off on Designing And Building A Progressive Web Application Without A Framework (Part 1)


Ben Frain

How does a web application actually work? I don’t mean from the end-user point of view. I mean in the technical sense. How does a web application actually run? What kicks things off? Without any boilerplate code, what’s the right way to structure an application? Particularly a client-side application where all the logic runs on the end-users device. How does data get managed and manipulated? How do you make the interface react to changes in the data?

These are the kind of questions that are simple to side-step or ignore entirely with a framework. Developers reach for something like React, Vue, Ember or Angular, follow the documentation to get up and running and away they go. Those problems are handled by the framework’s box of tricks.

That may be exactly how you want things. Arguably, it’s the smart thing to do if you want to build something to a professional standard. However, with the magic abstracted away, you never get to learn how the tricks are actually performed.

Don’t you want to know how the tricks are done?

I did. So, I decided to try building a basic client-side application, sans-framework, to understand these problems for myself.

But, I’m getting a little ahead of myself; a little background first.

Before starting this journey I considered myself highly proficient at HTML and CSS but not JavaScript. As I felt I’d solved the biggest questions I had of CSS to my satisfaction, the next challenge I set myself was understanding a programming language.

The fact was, I was relatively beginner-level with JavaScript. And, aside from hacking the PHP of WordPress around, I had no exposure or training in any other programming language either.

Let me qualify that ‘beginner-level’ assertion. Sure, I could get interactivity working on a page. Toggle classes, create DOM nodes, append and move them around, etc. But when it came to organizing the code for anything beyond that I was pretty clueless. I wasn’t confident building anything approaching an application. I had no idea how to define a set of data in JavaScript, let alone manipulate it with functions.

I had no understanding of JavaScript ‘design patterns’ — established approaches for solving oft-encountered code problems. I certainly didn’t have a feel for how to approach fundamental application-design decisions.

Have you ever played ‘Top Trumps’? Well, in the web developer edition, my card would look something like this (marks out of 100):

  • CSS: 95
  • Copy and paste: 90
  • Hairline: 4
  • HTML: 90
  • JavaScript: 13

In addition to wanting to challenge myself on a technical level, I was also lacking in design chops.

I had spent the past decade almost exclusively coding other people’s designs, so my visual design skills hadn’t faced any real challenge since the late noughties. Reflecting on that fact, and on my puny JavaScript skills, cultivated a growing sense of professional inadequacy. It was time to address my shortcomings.

A personal challenge took form in my mind: to design and build a client-side JavaScript web application.

On Learning

There have never been more great resources for learning computing languages, particularly JavaScript. However, it took me a while to find resources that explained things in a way that clicked. For me, Kyle Simpson’s ‘You Don’t Know JS’ and ‘Eloquent JavaScript’ by Marijn Haverbeke were a big help.

If you are beginning learning JavaScript, you will surely need to find your own gurus; people whose method of explaining works for you.

The first key thing I learned was that it’s pointless trying to learn from a teacher/resource that doesn’t explain things in a way you understand. Some people look at function examples with foo and bar in and instantly grok the meaning. I’m not one of those people. If you aren’t either, don’t assume programming languages aren’t for you. Just try a different resource and keep trying to apply the skills you are learning.

It’s also not a given that you will enjoy any kind of eureka moment where everything suddenly ‘clicks’; like the coding equivalent of love at first sight. It’s more likely it will take a lot of perseverance and considerable application of your learnings to feel confident.

As soon as you feel even a little competent, trying to apply your learning will teach you even more.

Here are some resources I found helpful along the way:

Right, that’s pretty much all you need to know about why I arrived at this point. The elephant in the room now is: why not use a framework?

Why Not React, Ember, Angular, Vue Et Al

Whilst the answer was alluded to at the beginning, I think the subject of why a framework wasn’t used needs expanding upon.

There is an abundance of high-quality, well-supported JavaScript frameworks, each specifically designed for building client-side web applications. Exactly the sort of thing I was looking to build. So I forgive you for wondering the obvious: like, err, why not use one?

Here’s my stance on that. When you learn to use an abstraction, that’s primarily what you are learning — the abstraction. I wanted to learn the thing, not the abstraction of the thing.

I remember learning some jQuery back in the day. Whilst the lovely API let me make DOM manipulations easier than ever before, I became powerless without it. I couldn’t even toggle classes on an element without needing jQuery. Task me with some basic interactivity on a page without jQuery to lean on and I stumbled about in my editor like a shorn Samson.

More recently, as I attempted to improve my understanding of JavaScript, I’d tried to wrap my head around Vue and React a little. But ultimately, I was never sure where standard JavaScript ended and React or Vue began. My opinion is that these abstractions are far more worthwhile when you understand what they are doing for you.

Therefore, if I was going to learn something I wanted to understand the core parts of the language. That way, I had some transferable skills. I wanted to retain something when the current flavor of the month framework had been cast aside for the next ‘hot new thing’.

Okay. Now, we’re caught up on why this app was getting made, and also, like it or not, how it would be made.

Let’s move on to what this thing was going to be.

An Application Idea

I needed an app idea. Nothing too ambitious; I didn’t have any delusions of creating a business start-up or appearing on Dragon’s Den — learning JavaScript and application basics was my primary goal.

The application needed to be something I had a fighting chance of pulling off technically, and of making a half-decent design job of to boot.

Tangent time.

Away from work, I organize and play indoor football whenever I can. As the organizer, it’s a pain to mentally note who has sent me a message to say they are playing and who hasn’t. 10 people are needed for a game typically, 8 at a push. There’s a roster of about 20 people who may or may not be able to play each game.

The app idea I settled on was something that enabled picking players from a roster, giving me a count of how many players had confirmed they could play.

As I thought about it more I felt I could broaden the scope a little more so that it could be used to organize any simple team-based activity.

Admittedly, I’d hardly dreamt up Google Earth. It did, however, have all the essential challenges: design, data management, interactivity, data storage, code organization.

Design-wise, I wouldn’t concern myself with anything other than a version that could run and work well on a phone viewport. I’d limit the design challenges to solving the problems on small screens only.

The core idea certainly lent itself to ‘to-do’ style applications, of which there were heaps of existing examples to look at for inspiration, whilst also having just enough difference to provide some unique design and coding challenges.

Intended Features

An initial bullet-point list of features I intended to design and code looked like this:

  • An input box to add people to the roster;
  • The ability to set each person to ‘in’ or ‘out’;
  • A tool that splits the people into teams, defaulting to 2 teams;
  • The ability to delete a person from the roster;
  • Some interface for ‘tools’. Besides splitting, available tools should include the ability to download the entered data as a file, upload previously saved data and delete-all players in one go;
  • The app should show a current count of how many people are ‘In’;
  • If there are no people selected for a game, it should hide the team splitter;
  • Pay mode. A toggle in settings that allows ‘in’ users to have an additional toggle to show whether they have paid or not.

At the outset, this is what I considered the features for a minimum viable product.
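To make that feature list concrete, here is a minimal sketch of how it might translate into data and functions. The names and data shape are my own illustrative inventions, not the final application code:

```javascript
// A hypothetical sketch of the core data — the property names and
// structure here are illustrative, not the app's actual code.
const roster = [
  { name: "Betty", in: true, paid: false },
  { name: "Dan", in: true, paid: true },
  { name: "Sunil", in: false, paid: false },
];

// Count how many people are currently 'in'.
function inCount(players) {
  return players.filter((player) => player.in).length;
}

// Naively split the 'in' players into teams (defaulting to 2 teams).
function splitTeams(players, teamCount = 2) {
  const teams = Array.from({ length: teamCount }, () => []);
  players
    .filter((player) => player.in)
    .forEach((player, i) => teams[i % teamCount].push(player.name));
  return teams;
}

console.log(inCount(roster));     // → 2
console.log(splitTeams(roster));  // → [ [ 'Betty' ], [ 'Dan' ] ]
```

Even a toy sketch like this surfaces the real questions: where does this data live, what updates it, and how does the interface react when it changes?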


Designs started on scraps of paper. It was illuminating (read: crushing) to find out just how many ideas which were incredible in my head turned out to be ludicrous when subjected to even the meagre scrutiny afforded by a pencil drawing.

Many ideas were therefore quickly ruled out, but the flip side was that by sketching some ideas out, it invariably led to other ideas I would never have otherwise considered.

Now, designers reading this will likely be like, “Duh, of course” but this was a real revelation to me. Developers are used to seeing later stage designs, rarely seeing all the abandoned steps along the way prior to that point.

Once happy with something as a pencil drawing, I’d try and re-create it in the design package, Sketch. Just as ideas fell away at the paper and pencil stage, an equal number failed to make it through the next fidelity stage of Sketch. The ones that seemed to hold up as artboards in Sketch were then chosen as the candidates to code out.

I’d find in turn that when those candidates were built in code, a percentage also failed to work for varying reasons. Each fidelity step exposed new challenges for the design to either pass or fail. And a failure would lead me, literally and figuratively, back to the drawing board.

As such, ultimately, the design I ended up with is quite a bit different than the one I originally had in Sketch. Here are the first Sketch mockups:

Initial design of Who’s In application

Initial menu for Who’s In application

Even then, I was under no delusions; it was a basic design. However, at this point I had something I was relatively confident could work and I was chomping at the bit to try and build it.

Technical Requirements

With some initial feature requirements and a basic visual direction, it was time to consider what should be achieved with the code.

Although received wisdom dictates that the way to make applications for iOS or Android devices is with native code, we have already established that my intention was to build the application with JavaScript.

I was also keen to ensure that the application ticked all the boxes necessary to qualify as a Progressive Web Application, or PWA as they are more commonly known.

On the off chance you are unaware what a Progressive Web Application is, here is the ‘elevator pitch’. Conceptually, just imagine a standard web application but one that meets some particular criteria. The adherence to this set of particular requirements means that a supporting device (think mobile phone) grants the web app special privileges, making the web application greater than the sum of its parts.

On Android, in particular, it can be near impossible to distinguish a PWA, built with just HTML, CSS and JavaScript, from an application built with native code.

Here is the Google checklist of requirements for an application to be considered a Progressive Web Application:

  • Site is served over HTTPS;
  • Pages are responsive on tablets & mobile devices;
  • All app URLs load while offline;
  • Metadata provided for Add to Home screen;
  • First load fast even on 3G;
  • Site works cross-browser;
  • Page transitions don’t feel like they block on the network;
  • Each page has a URL.

Now in addition, if you really want to be the teacher’s pet and have your application considered as an ‘Exemplary Progressive Web App’, then it should also meet the following requirements:

  • Site’s content is indexed by Google;
  • metadata is provided where appropriate;
  • Social metadata is provided where appropriate;
  • Canonical URLs are provided when necessary;
  • Pages use the History API;
  • Content doesn’t jump as the page loads;
  • Pressing back from a detail page retains scroll position on the previous list page;
  • When tapped, inputs aren’t obscured by the on-screen keyboard;
  • Content is easily shareable from standalone or full-screen mode;
  • Site is responsive across phone, tablet and desktop screen sizes;
  • Any app install prompts are not used excessively;
  • The Add to Home Screen prompt is intercepted;
  • First load very fast even on 3G;
  • Site uses cache-first networking;
  • Site appropriately informs the user when they’re offline;
  • Provide context to the user about how notifications will be used;
  • UI encouraging users to turn on Push Notifications must not be overly aggressive;
  • Site dims the screen when the permission request is showing;
  • Push notifications must be timely, precise and relevant;
  • Provides controls to enable and disable notifications;
  • User is logged in across devices via Credential Management API;
  • User can pay easily via native UI from Payment Request API.

Crikey! I don’t know about you but that second bunch of stuff seems like a whole lot of work for a basic application! As it happens there are plenty of items there that aren’t relevant to what I had planned anyway. Despite that, I’m not ashamed to say I lowered my sights to only pass the initial tests.

For a whole section of application types, I believe a PWA is a more applicable solution than a native application. Where games and SaaS arguably make more sense in an app store, smaller utilities can live quite happily and more successfully on the web as Progressive Web Applications.

Whilst on the subject of me shirking hard work, another choice made early on was to try and store all data for the application on the user’s own device. That way, it wouldn’t be necessary to hook up with data services and servers and deal with log-ins and authentications. For where my skills were at, figuring out authentication and storing user data seemed like it would almost certainly be biting off more than I could chew, and overkill for the remit of the application!
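As a rough sketch of what that decision implies (the function names and storage key below are my own illustrative choices, not the app’s actual code): browser storage such as localStorage only holds strings, so the application data needs serializing on the way in and parsing on the way out.

```javascript
// Hypothetical save/load sketch. `storage` is any object with the
// localStorage-style getItem/setItem API; in the browser you would
// pass window.localStorage. The key name is an illustrative choice.
const STORAGE_KEY = "whos-in-data";

function saveData(storage, data) {
  storage.setItem(STORAGE_KEY, JSON.stringify(data));
}

function loadData(storage) {
  const raw = storage.getItem(STORAGE_KEY);
  // Fall back to an empty roster if nothing has been saved yet.
  return raw ? JSON.parse(raw) : { players: [] };
}
```

Passing the storage object in as an argument, rather than reaching for a global, keeps functions like these testable outside a browser.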

Technology Choices

With a fairly clear idea on what the goal was, attention turned to the tools that could be employed to build it.

I decided early on to use TypeScript, which is described on its website as “… a typed superset of JavaScript that compiles to plain JavaScript.” What I’d seen and read of the language I liked, especially the fact it lent itself so well to static analysis.

Static analysis simply means a program can look at your code before running it (i.e. while it is static) and highlight problems. It can’t necessarily point out logical issues, but it can point out code that doesn’t conform to a set of rules.

Anything that could point out my (sure to be many) errors as I went along had to be a good thing, right?

If you are unfamiliar with TypeScript consider the following code in vanilla JavaScript:

console.log(`${count} players`);
let count = 0;

Run this code and you will get an error something like:

ReferenceError: Cannot access uninitialized variable.

For this basic example, anyone with even a little JavaScript prowess doesn’t need a tool to tell them things won’t end well.

However, if you write that same code in TypeScript, this happens in the editor:

TypeScript in action

I’m getting some feedback on my idiocy before I even run the code! That’s the beauty of static analysis. This feedback was often like having a more experienced developer sat with me catching errors as I went.

TypeScript primarily, as the name implies, lets you specify the ‘type’ expected for each thing in the code. This prevents you inadvertently ‘coercing’ one type to another, or attempting to run a method on a piece of data that isn’t applicable — an array method on an object, for example. This isn’t the sort of thing that necessarily results in an error when the code runs, but it can certainly introduce hard-to-track bugs. Thanks to TypeScript, you get feedback in the editor before even attempting to run the code.
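Here’s a contrived illustration of that class of bug (my own example, not code from the app). Plain JavaScript only complains about the misapplied method when the code actually runs, and in the coercion case it doesn’t complain at all; TypeScript would flag the misapplied method in the editor:

```javascript
const scores = { ben: 3, jack: 5 };

// An array method on an object: this only fails at run time in plain
// JavaScript, whereas TypeScript would underline it in the editor.
let caught = false;
try {
  scores.map((score) => score * 2);
} catch (error) {
  caught = error instanceof TypeError;
}
console.log(caught);    // → true

// Silent coercion: no error at all, just the 'wrong' kind of value.
console.log("10" + 5);  // → "105", not 15
```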

TypeScript was certainly not essential in this journey of discovery and I would never encourage anyone to jump on tools of this nature unless there was a clear benefit. Setting tools up and configuring tools in the first place can be a time sink so definitely consider their applicability before diving in.

There are other benefits afforded by TypeScript we will come to in the next article in this series but the static analysis capabilities were enough alone for me to want to adopt TypeScript.

There were knock-on considerations of the choices I was making. Opting to build the application as a Progressive Web Application meant I would need to understand Service Workers to some degree. Using TypeScript would mean introducing build tools of some sort. How would I manage those tools? Historically, I’d used NPM as a package manager but what about Yarn? Was it worth using Yarn instead? Being performance-focused would mean considering some minification or bundling tools; tools like webpack were becoming more and more popular and would need evaluating.


I’d recognized a need to embark on this quest. My JavaScript powers were weak and nothing girds the loins as much as attempting to put theory into practice. Deciding to build a web application with vanilla JavaScript was to be my baptism of fire.

I’d spent some time researching and considering the options for making the application and decided that making the application a Progressive Web App made the most sense for my skill-set and the relative simplicity of the idea.

I’d need build tools, a package manager, and subsequently, a whole lot of patience.

Ultimately, at this point the fundamental question remained: was this something I could actually manage? Or would I be humbled by my own ineptitude?

I hope you join me in part two when you can read about build tools, JavaScript design patterns and how to make something more ‘app-like’.

Smashing Editorial
(dm, yk, il)

Source: Smashing Magazine, Designing And Building A Progressive Web Application Without A Framework (Part 1)

Collective #534




Flawwwless is a coding education platform with easy to follow interactive tutorials.

Check it out



With ImportDoc you can create a web page that updates dynamically with the content of a Google document.

Check it out


Gatsby Themes

Exciting news for Gatsby users: With a Gatsby theme, all of your default configuration (shared functionality, data sourcing, design) is abstracted out of your site, and into an installable package.

Check it out


The Cool Club FWA

The web presentation of The Cool Club FWA, a deck of cards displaying the 54 coolest websites in history, as featured in “Web Design, The evolution of the Digital World 1990-today”.

Check it out



With Stein you can use Google Sheets as your no-setup data store.

Check it out

Collective #534 was written by Pedro Botelho and published on Codrops.

Source: Codrops, Collective #534

The Essential Guide To JavaScript’s Newest Data Type: BigInt



Faraz Kelhini

The BigInt data type aims to enable JavaScript programmers to represent integer values larger than the range supported by the Number data type. The ability to represent integers with arbitrary precision is particularly important when performing mathematical operations on large integers. With BigInt, integer overflow will no longer be an issue.

Additionally, you can safely work with high-resolution timestamps, large integer IDs, and more without having to use a workaround. BigInt is currently a stage 3 proposal. Once added to the specification, it will become the second numeric data type in JavaScript, which will bring the total number of supported data types to eight:

  • Boolean
  • Null
  • Undefined
  • Number
  • BigInt
  • String
  • Symbol
  • Object

In this article, we will take a good look at BigInt and see how it can help overcome the limitations of the Number type in JavaScript.

The Problem

The lack of an explicit integer type in JavaScript is often baffling to programmers coming from other languages. Many programming languages support multiple numeric types such as float, double, integer, and bignum, but that’s not the case with JavaScript. In JavaScript, all numbers are represented in double-precision 64-bit floating-point format as defined by the IEEE 754-2008 standard.

Under this standard, very large integers that cannot be exactly represented are automatically rounded. To be precise, the Number type in JavaScript can only safely represent integers between -9007199254740991 (-(2^53 - 1)) and 9007199254740991 (2^53 - 1). Any integer value that falls outside this range may lose precision.

This can be easily examined by executing the following code:

console.log(9999999999999999);    // → 10000000000000000

This integer is larger than the largest number JavaScript can reliably represent with the Number primitive. Therefore, it’s rounded. Unexpected rounding can compromise a program’s reliability and security. Here’s another example:

// notice the last digits
9007199254740992 === 9007199254740993;    // → true

JavaScript provides the Number.MAX_SAFE_INTEGER constant that allows you to quickly obtain the maximum safe integer in JavaScript. Similarly, you can obtain the minimum safe integer by using the Number.MIN_SAFE_INTEGER constant:

const minInt = Number.MIN_SAFE_INTEGER;

console.log(minInt);         // → -9007199254740991

console.log(minInt - 5);     // → -9007199254740996

// notice how this outputs the same value as above
console.log(minInt - 4);     // → -9007199254740996

The Solution

As a workaround to these limitations, some JavaScript developers represent large integers using the String type. The Twitter API, for example, adds a string version of IDs to objects when responding with JSON. Additionally, a number of libraries such as bignumber.js have been developed to make working with large integers easier.

With BigInt, applications no longer need a workaround or library to safely represent integers beyond Number.MAX_SAFE_INTEGER and Number.MIN_SAFE_INTEGER. Arithmetic operations on large integers can now be performed in standard JavaScript without risking loss of precision. The added benefit of using a native data type over a third-party library is better run-time performance.

To create a BigInt, simply append n to the end of an integer. Compare:

console.log(9007199254740995n);    // → 9007199254740995n
console.log(9007199254740995);     // → 9007199254740996

Alternatively, you can call the BigInt() constructor:

BigInt("9007199254740995");    // → 9007199254740995n

BigInt literals can also be written in binary, octal or hexadecimal notation:

// binary
0b100000000000000000000000000000000000000000000000000011n;
// → 9007199254740995n

// hex
0x20000000000003n;
// → 9007199254740995n

// octal
0o400000000000000003n;
// → 9007199254740995n

// note that legacy octal syntax is not supported
0400000000000000003n;
// → SyntaxError

Keep in mind that you can’t use the strict equality operator to compare a BigInt to a regular number because they are not of the same type:

console.log(10n === 10);    // → false

console.log(typeof 10n);    // → bigint
console.log(typeof 10);     // → number

Instead, you can use the equality operator, which performs implicit type conversion before comparing its operands:

console.log(10n == 10);    // → true

All arithmetic operators can be used on BigInts except for the unary plus (+) operator:

10n + 20n;    // → 30n
10n - 20n;    // → -10n
+10n;         // → TypeError: Cannot convert a BigInt value to a number
-10n;         // → -10n
10n * 20n;    // → 200n
20n / 10n;    // → 2n
23n % 10n;    // → 3n
10n ** 3n;    // → 1000n

let x = 10n;
++x;          // → 11n
--x;          // → 10n

The reason that the unary plus (+) operator is not supported is that some programs may rely on the invariant that + always produces a Number, or throws an exception. Changing the behavior of + would also break asm.js code.

Naturally, when used with BigInt operands, arithmetic operators are expected to return a BigInt value. Therefore, the result of the division (/) operator has its fractional part truncated (rounded towards zero). For example:

25 / 10;      // → 2.5
25n / 10n;    // → 2n

Implicit Type Conversion

Because implicit type conversion could lose information, mixed operations between BigInts and Numbers are not allowed. When mixing large integers and floating-point numbers, the resulting value may not be accurately representable by BigInt or Number. Consider the following example:

(9007199254740992n + 1n) + 0.5

The result of this expression is outside of the domain of both BigInt and Number. A Number with a fractional part cannot be accurately converted to a BigInt. And a BigInt larger than 2^53 cannot be accurately converted to a Number.

As a result of this restriction, it’s not possible to perform arithmetic operations with a mix of Number and BigInt operands. You also cannot pass a BigInt to Web APIs and built-in JavaScript functions that expect a Number. Attempting to do so will cause a TypeError:

10 + 10n;    // → TypeError
Math.max(2n, 4n, 6n);    // → TypeError

Note that relational operators do not follow this rule, as shown in this example:

10n > 5;    // → true

If you want to perform arithmetic computations with BigInt and Number, you first need to determine the domain in which the operation should be done. To do that, simply convert either of the operands by calling Number() or BigInt():

BigInt(10) + 10n;    // → 20n
// or
10 + Number(10n);    // → 20

When encountered in a Boolean context, a BigInt is treated similarly to a Number. In other words, a BigInt is considered a truthy value as long as it’s not 0n:

if (5n) {
    // this code block will be executed
}

if (0n) {
    // but this code block won't
}

No implicit type conversion occurs when sorting an array of BigInts and Numbers:

const arr = [3n, 4, 2, 1n, 0, -1n];

arr.sort();    // → [-1n, 0, 1n, 2, 3n, 4]
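Note that the default sort compares elements as strings, which in the example above happens to coincide with numeric order. If you want guaranteed numeric ordering, you can supply a comparator; relational operators, unlike arithmetic ones, accept mixed BigInt and Number operands, so the comparison works across both types:

```javascript
const mixed = [3n, 4, 2, 1n, 0, -1n];

// Relational comparison (<, >) is allowed on mixed operands, so this
// comparator sorts BigInts and Numbers together in numeric order.
mixed.sort((a, b) => (a < b ? -1 : a > b ? 1 : 0));

console.log(mixed);    // → [-1n, 0, 1n, 2, 3n, 4]
```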

Bitwise operators such as |, &, <<, >>, and ^ operate on BigInts in a similar way to Numbers. Negative numbers are interpreted as infinite-length two’s complement. Mixed operands are not allowed. Here are some examples:

90 | 115;      // → 123
90n | 115n;    // → 123n
90n | 115;     // → TypeError

The BigInt Constructor

As with other primitive types, a BigInt can be created using a constructor function. The argument passed to BigInt() is automatically converted to a BigInt, if possible:

BigInt("10");    // → 10n
BigInt(10);      // → 10n
BigInt(true);    // → 1n

Data types and values that cannot be converted throw an exception:

BigInt(10.2);     // → RangeError
BigInt(null);     // → TypeError
BigInt("abc");    // → SyntaxError

You can directly perform arithmetic operations on a BigInt created using a constructor:

BigInt(10) * 10n;    // → 100n

When used as operands of the strict equality operator, BigInts created using a constructor are treated similarly to regular ones:

BigInt(true) === 1n;    // → true

Library Functions

JavaScript provides two library functions for representing BigInt values as signed or unsigned integers:

  • BigInt.asUintN(width, BigInt): wraps a BigInt between 0 and 2^width - 1
  • BigInt.asIntN(width, BigInt): wraps a BigInt between -2^(width - 1) and 2^(width - 1) - 1

These functions are particularly useful when performing 64-bit arithmetic operations. This way you can stay within the intended range.
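For example, arithmetic that overflows an unsigned 64-bit integer wraps around to zero:

```javascript
// The largest value an unsigned 64-bit integer can hold.
const maxUint64 = 2n ** 64n - 1n;

console.log(BigInt.asUintN(64, maxUint64));         // → 18446744073709551615n
console.log(BigInt.asUintN(64, maxUint64 + 1n));    // → 0n (wraps around)
console.log(BigInt.asUintN(64, maxUint64 + 2n));    // → 1n

// The signed variant wraps overflow into negative territory instead.
console.log(BigInt.asIntN(64, 2n ** 63n));          // → -9223372036854775808n
```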

Browser Support And Transpiling

At the time of this writing, Chrome 67+ and Opera 54+ fully support the BigInt data type. Unfortunately, Edge and Safari haven’t implemented it yet. Firefox doesn’t support BigInt by default, but it can be enabled by setting javascript.options.bigint to true in about:config. An up-to-date list of supported browsers is available on Can I use….

Unluckily, transpiling BigInt is an extremely complicated process which incurs a hefty run-time performance penalty. It’s also impossible to directly polyfill BigInt because the proposal changes the behavior of several existing operators. For now, a better alternative is to use the JSBI library, which is a pure-JavaScript implementation of the BigInt proposal.

This library provides an API that behaves exactly the same as the native BigInt. Here’s how you can use JSBI:

import JSBI from './jsbi.mjs';

const b1 = JSBI.BigInt(Number.MAX_SAFE_INTEGER);
const b2 = JSBI.BigInt('10');

const result = JSBI.add(b1, b2);

console.log(String(result));    // → '9007199254741001'

An advantage of using JSBI is that once browser support improves, you won’t need to rewrite your code. Instead, you can automatically compile your JSBI code into native BigInt code by using a babel plugin. Furthermore, the performance of JSBI is on par with native BigInt implementations. You can expect wider browser support for BigInt soon.


BigInt is a new data type intended for use when integer values are larger than the range supported by the Number data type. This data type allows us to safely perform arithmetic operations on large integers, represent high-resolution timestamps, use large integer IDs, and more without the need to use a library.

It’s important to keep in mind that you cannot perform arithmetic operations with a mix of Number and BigInt operands. You’ll need to determine the domain in which the operation should be done by explicitly converting either of the operands. Moreover, for compatibility reasons, you are not allowed to use the unary plus (+) operator on a BigInt.

What do you think? Do you find BigInt useful? Let us know in the comments!


Source: Smashing Magazine, The Essential Guide To JavaScript’s Newest Data Type: BigInt

Monthly Web Development Update 7/2019: Modern Techniques And Good Trouble



Anselm Hannemann

What can we do to cause “good trouble”? First of all, I think it needs to be friendly, helpful, and meaningful action that doesn’t harm other people’s lives. Secondly, it should be something we strongly believe in: it might be using simpler JavaScript methods, reducing the application size, building a better toggle UI, publishing a book, or running a business without selling user data to others. Whatever it is, it’s good to have a point of view and to talk about it.

It’s good to educate others about accessibility problems, about how to listen better in a conversation, and about how to manage projects, products, or even a company better. The most important thing in all of these actions is to remember that they should help other people, not harm them, animals, or our environment in general.

Doing something useful — as small as it might seem — is always a good thing. And don’t forget to honor your action just by smiling and being thankful for what you did!


  • Chrome 76 removes a couple of things, like the lazyload feature policy and insecure usage of DeviceMotionEvent and DeviceOrientationEvent. If you use them, please ensure you serve them from a secure context or replace them with their successors.
  • Firefox 68 is out, and here’s what’s new: BigInts for JavaScript, accessibility checks in DevTools, and CSS scroll snapping and marker styling. Access to cameras, microphones, and other media devices is no longer allowed in insecure contexts like plain HTTP. It’s now possible to resend a network request without editing the method, URL, parameters, and headers via DevTools, and a lot of (compatibility) fixes for CSS features are included as well.
  • Chrome 76 brings image support for the async clipboard API, making it easy to programmatically copy and paste image/png (currently, this is the only supported format though, unfortunately) images.
  • Tracking prevention is now available in Microsoft Edge preview, following other browsers like Safari and Firefox.


  • Have you heard of the concept of “good trouble”? Frank Chimero defines it as questioning and re-imagining the status quo, and having your actions stand in contrast to the norm. But the interview with the designer shows much more than a new concept: it challenges how we work today and encourages doing your own thing, even when it doesn’t match society’s norms.

Particularly, I like this quote here:

“Slow down, find a quiet place and create time for solitude so you can hear yourself. It’s so noisy out there.”

  • What if control is only an illusion? We would realize that the true nature of an experience is revealed only in the interplay with the people who use it and that an invalidated design is nothing but an opinion. Quite a thought that puts our assumptions and approach on projects into a different light.





Work & Life

  • Active Listening is a skill that helps us listen for meaning and for how the other person is feeling, instead of the usual listening that focuses on ‘how can I reply or comment on this, or how will we solve this?’. Buffer’s guide, written by Marcus Wermuth, is a great resource to learn and practice Active Listening.
  • Christoph Rumpel shares what he learned from self-publishing a book, with interesting insights into its finances and what to avoid or do better.
  • Ben Werdmüller on doing well while doing good: a personal story about struggling with revenue, investments, and third-party capital, trying to earn money on your own by selling your product while having free competitors, and how to still produce good things while doing well financially.
  • Shape Up — Stop Running in Circles and Ship Work that Matters is a new, free online book by Ryan Singer about project management and leading a company and product. It’s amazing, and while I’ve only had time to flick through it quickly and read some individual chapters and sections, it will definitely become a resource to save and refer to regularly.

Going Beyond…

I was in the cinema last week to watch a movie about some people who created a farm. The trailer was nice, and while I wasn’t 100% convinced by it, it was an evening when I felt like going out to see a movie. So I did, and it was good that I went to see “The Biggest Little Farm”. The farmer made the film himself, as he’s a wildlife filmmaker, so expect some quite stunning pictures and sequences of wild animals in there!

The most revealing part was how much of an impact just a handful of people can make, turning desert land into farmland in a few years, and how much we as humans can influence wildlife: giving a habitat to insects, producing quality food, and drawing CO2 from the air into the soil to make plants grow better, all in order to restore nature and make an impact in the fight against climate change.

At several points during the movie, I was close to tears, and I was extremely thankful that I have my own little garden space where I can do similar things (though on a far smaller scale than their farm). If you’re up for something new, want to learn something about food, meat, the economy and how it all connects, or want to see how to create a beautiful green space out of desert, this movie is for you.

Last but not least, solar panels are a good way to produce renewable energy and a good use of roof space. In China, though, air pollution is currently so bad that solar panels sometimes stop working. Another reason to act quickly: if solar panels stop working for lack of sunlight, our bodies suffer that same lack, and we need sunlight for our health.

Smashing Editorial

Source: Smashing Magazine, Monthly Web Development Update 7/2019: Modern Techniques And Good Trouble

Collective #533

dreamt up by webguru in Uncategorized | Comments Off on Collective #533


Thorne: The Frontier Within

A case study of the wonderful The Frontier Within which is an immersive installation and web experience that captures the participant’s circulatory, respiratory and nervous system data, and transforms it into a living, breathing, interactive portrait of the body.

Read it


Reduced Motion Auto-Play Video

Scott O’Hara shows how to use the reduced motion media query to implement video components in a way that will respect user preferences from an OS level.

Read it


Is postMessage slow?

An article by Surma that investigates the performance of postMessage() and how to avoid blowing your budgets.

Read it


Repo Lovers

An online magazine made for developers, coders, hackers, makers, tinkerers, and everyone else in between. Repo Lovers sets out to build an archive of interviews featuring talented folks building software.

Check it out


Chat Messaging UI Kit

A high fidelity Chat UI kit for Sketch. Chat interfaces for live chat, team collaboration, messaging, customer support and gaming are included.

Get it

Collective #533 was written by Pedro Botelho and published on Codrops.

Source: Codrops, Collective #533

Yarn Workspaces: Organize Your Project’s Codebase Like A Pro

dreamt up by webguru in Uncategorized | Comments Off on Yarn Workspaces: Organize Your Project’s Codebase Like A Pro


Jorge Ferreiro

Any time I start working on a new project, I ask myself, “Should I use separate git repositories for my back-end server and my front-end client(s)? What’s the best way to organize the codebase?”

I had this same question after a few months working on my personal website. I originally had all the code in the same repository: the back end used Node.js and the front end used ES6 with Pug. I adopted this organization for convenience, since having both projects in the same repo made it easy to search for functions and classes, and facilitated refactors. However, I found some downsides:

  • No independent deployments.
    Both apps were using the same package.json, and there was no clear separation between the two projects.
  • Unclear boundaries.
    Since I relied on a single global package.json, I didn’t have a mechanism to set specific dependency versions for the back end and front end.
  • Shared utilities and code without versioning.

After some research, I found that Yarn workspaces is a great tool for addressing those downsides, and a helpful tool for creating a monorepo project (more on that later!).

In this article, I share an intro to Yarn workspaces. We’ll run through a tutorial together on how to create your first project with it, and we’ll finish with a recap and next steps.

What Are Yarn Workspaces?

Yarn is a package manager by the folks at Facebook, and it has a great feature called Yarn workspaces. Yarn workspaces let you organize your project codebase using a monolithic repository (monorepo). The idea is that a single repository contains multiple packages. Packages are isolated and can live independently of the larger project.

Yarn Workspaces

As an alternative, we could place all of these packages into separate repositories. Unfortunately, this approach affects the shareability, efficiency, and developer experience when developing on the packages and their dependent projects. Furthermore, when we work in a single repository we can move more swiftly and build more specific tooling to improve processes for the entire development life cycle.

Monorepo projects have been widely adopted by large companies like Google and Facebook, and they have proven that monorepos can scale.

React is a good example of an open-source project that uses a monorepo, and it uses Yarn workspaces to achieve that purpose. In the next section, we will learn how to create our first monorepo project with Yarn.

Creating A Monorepo Project With React And Express Using Yarn Workspaces In Six Steps

So far, we have learned what Yarn is, what a monorepo is, and why Yarn is a great tool for creating a monorepo. Now let’s learn from scratch how to set up a new project using Yarn workspaces. To follow along, you’ll need a working environment with an up-to-date installation of npm. Download the source code.


To fully complete this tutorial, you will need to have Yarn installed on your machine. If you haven’t installed Yarn before, please follow these instructions.

These are the steps we’ll be following in this tutorial:

  1. Create Your Project And Root Workspace
  2. Create A React Project And Add It To The Workspace List
  3. Create An Express Project And Add It To The Workspace
  4. Install All The Dependencies And Say Hello To yarn.lock
  5. Using A Wildcard (*) To Import All Your Packages
  6. Add A Script To Run Both Packages

1. Create Your Project And Root Workspace

In your local machine terminal, create a new folder called example-monorepo:

$ mkdir example-monorepo

Inside the folder, create a new package.json with our root workspace.

$ cd example-monorepo
$ touch package.json

This package should be private in order to prevent accidentally publishing the root workspace. Add the following code to your new package.json file to make the package private:

{
   "private": true,
   "name": "example-monorepo",
   "workspaces": [],
   "scripts": {}
}

2. Create A React Project And Add It To The Workspace List

In this step, we will create a new React project and add it to the list of packages inside the root workspace.

First, let’s create a folder called packages where we will add the different projects we will create in the tutorial:

$ mkdir packages

Facebook has a command to create new React projects: create-react-app. We’ll use it to create a new React app with all the required configuration and scripts. We are creating this new project with the name “client”, inside the packages folder we just created.

$ yarn create react-app packages/client

Once we have created our new React project, we need to tell Yarn to treat that project as a workspace. To do that, we simply need to add the project’s path, “packages/client”, to the “workspaces” list in the root package.json. (Workspace entries are paths relative to the root, so be sure the path matches the one you used when running the create-react-app command.)

{
   "private": true,
   "name": "example-monorepo",
   "workspaces": ["packages/client"],
   "scripts": {}
}

3. Create An Express Project And Add It To The Workspace

Now it’s time to add a back-end app! We use express-generator to create an Express skeleton with all the required configuration and scripts.

Make sure you have express-generator installed on your computer. You can install it using Yarn with the following command:

$ yarn global add express-generator --prefix /usr/local

Using express-generator, we create a new Express app with the name “server” inside the packages folder.

$ express --view=pug packages/server

Finally, add the new package’s path, “packages/server”, to the workspaces list inside the root package.json.

{
   "private": true,
   "name": "example-monorepo",
   "workspaces": ["packages/client", "packages/server"],
   "scripts": {}
}

Note: This tutorial is simplified to only two packages (server and client). In a real project, you might have as many packages as you need, and by convention the open-source community uses this naming pattern: @your-project-name/package-name. For example, I use @ferreiro/server on my website.

4. Install All The Dependencies And Say Hello To yarn.lock

Once we have added our React app, as well as our Express server, we need to install all the dependencies. Yarn workspaces simplify this process: we no longer need to go into every single application and install its dependencies manually. Instead, we execute one command — yarn install — and Yarn does the magic of installing all the dependencies for every package, and optimizing and caching them.

Run the following command:

$ yarn install

This command generates a yarn.lock file (similar to this example). It contains all the dependencies for your project, as well as the version numbers for each dependency. Yarn generates this file automatically, and you should not modify it.

5. Using A Wildcard (*) To Import All Your Packages

Until now, for every new package we added, we were also forced to update the root package.json to include the new package in the workspaces:[] list.

We can avoid this manual step by using a wildcard (*) that tells Yarn to include every package inside the packages folder.

Inside the root package.json, update the file content with the following line: "workspaces": ["packages/*"]

{
   "private": true,
   "name": "example-monorepo",
   "workspaces": ["packages/*"],
   "scripts": {}
}

6. Add A Script To Run Both Packages

Last step! We need a way to run both packages — the React client and the Express server — simultaneously. For this example, we will use concurrently. This package lets us run multiple commands in parallel.

Add concurrently to the root package.json:

$ yarn add -W concurrently

Add three new scripts inside the root workspace package.json. Two scripts initialize the React client and the Express server independently; the third uses concurrently to run both scripts in parallel. See this code for reference.

{
   "private": true,
   "name": "example-monorepo",
   "workspaces": ["packages/*"],
   "scripts": {
       "client": "yarn workspace client start",
       "server": "yarn workspace server start",
       "start": "concurrently --kill-others-on-fail \"yarn server\" \"yarn client\""
   }
}

Note: We won’t need to write start scripts for the “server” and “client” packages ourselves, because the tools we used to generate those packages (create-react-app and express-generator) already add them for us. So we are good to go!

Finally, make sure you update the Express boot-up script to run the Express server on port 4000. Otherwise, the client and server will try to use the same port (3000).

Go to packages/server/bin/www and change the default port in line 15.

var port = normalizePort(process.env.PORT || '4000');

Now we are ready to run our packages!

$ yarn start

Where To Go From Here

Let’s recap what we’ve covered. First, we learned about Yarn workspaces and why it’s a great tool for creating a monorepo project. Then, we created our first JavaScript monorepo project using Yarn and divided the logic of our app into multiple packages: client and server. Also, we created our first basic npm scripts and added the required dependencies for each app.

From this point, I’d suggest you review open-source projects in detail to see how they use Yarn workspaces to split the project logic into many packages. React is a good one.

Jorge Ferreiro’s website using Yarn workspaces and packages with back-end and front-end apps (Large preview)

Also, if you want to see a production website using this approach to separate back-end and front-end apps into independent packages, you can check the source of my website, which also includes a blog admin. When I migrated the codebase to use Yarn workspaces, I created a pull request with Kyle Wetch.

Moreover, I set up the infrastructure for a hackathon project that uses React, webpack, Node.js, and Yarn workspaces, and you can check the source code over here.

Finally, it would be really interesting for you to learn how to publish your independent packages, to become familiar with the development life cycle. There are a couple of tutorials worth checking out: yarn publish or npm publish.

For any comments or questions, don’t hesitate to reach out to me on Twitter. Also, in the following months, I’ll publish more content about this in my blog, so you can subscribe there as well. Happy coding!

Smashing Editorial
(dm, og, il)

Source: Smashing Magazine, Yarn Workspaces: Organize Your Project’s Codebase Like A Pro

7 Gorgeous Free And Open-Source Typefaces And When To Use Them

dreamt up by webguru in Uncategorized | Comments Off on 7 Gorgeous Free And Open-Source Typefaces And When To Use Them


Noemi Stauffer

To facilitate your font picking and pairing process, I’ve included examples of how these typefaces have been put to use recently, as well as a few pairing ideas. Enjoy and don’t forget to ping me on Twitter to show me how you’ve used them — I’d love to see it!

Gangster Grotesk

Designed by Adrien Midzic, Gangster Grotesk is a contemporary grotesque with angled terminal strokes that slightly curve inward. Because of these quirky individual touches, the typeface brings a unique flavor to headlines and posters when used at large sizes. Its low contrast and slightly condensed width make it a good choice for body copy and other texts of small type sizes as well.

The font family comes in three weights, from Light to Bold, with stylistic alternates, including a loopy expressive ampersand. Gangster Grotesk is offered for free when signing up for the Fresh Fonts newsletter.

Preview of Gangster Grotesk

Gangster Grotesk. (Large preview)

Suggested Font Pairings

When applying Gangster Grotesk to titles, the quirky grotesque pairs well with low x-height serifs for text settings, such as FF Atma. Used at small point sizes instead, pairing Gangster Grotesk with Le Murmure (see below) offers the right mix of character and neutrality.

Use Case

Gangster Grotesk excels in punchy designs with strong colors when it’s asked to render strings of text, for example on this flyer designed by Tokyo-based Juri Okita for Pells Coffee.

An example of Gangster Grotesk in use

Gangster Grotesk in use. (Large preview)

Le Murmure

Preview of Le Murmure

Le Murmure. (Large preview)

Recently awarded a Certificate of Typographic Excellence by the Type Directors Club, Le Murmure was commissioned by French design agency Murmure to renew their brand image. Drawing inspiration from magazine titling fonts, Le Murmure is a condensed sans serif with an interesting mismatch between its characters, making it especially distinctive for use at large sizes. Its height and the singularity of its shapes provide elegance while conveying notions of experimentation and creativity. Le Murmure comes with many — even more — original alternate letters, and there is even a stylistic set that ‘randomizes’ all alternates for you (SS08).

Suggested Font Pairings

Used as a titling font, Le Murmure can pair well with a sans serif with warm curves, like Standard CT, or with a sans that has more pronounced irregularities, like Dinamo’s Prophet.

Use Case

Marrakesh’s Untitled Duo picked Le Murmure for the headlines of their studio’s website, complemented with Classic Sans for the navigation, which they also used at smaller size for the body text.

An example of Le Murmure in use

Le Murmure in use. View full website. (Large preview)

An example of Le Murmure in use

Le Murmure in use. View full website. (Large preview)

Reforma

Preview of Reforma

Reforma. (Large preview)

Reforma is a bespoke typeface designed by PampaType for the Universidad Nacional de Córdoba in Argentina, an educational institution more than 400 years old. The typeface is composed of three subfamilies: Reforma 1918, a classic Serif, Reforma 2018, a modern Sans, and Reforma 1969, an intermediate hybrid that combines the qualities of the other two (subtle modulation and flare serifs). I find all three subfamilies to adapt well to a wide range of bodies, from display to immersive text, and each one of them comes in three weights, with matching italics.

Suggested Font Pairings

This typeface allows for interesting combinations among its different styles. The most obvious would be to use Reforma Sans for display use and to pair it with its serif counterpart for body text. However, I would encourage you to do something more original and try it the other way around, using the serif for titling.

Use Case

Graphic designer Étienne Pouvreau chose Reforma for the creation of the annual programs of the two Caf family centers of the Loir-et-Cher department in France. The booklets feature Reforma 1918 (the serif) for the headings, in italic, and Reforma 2018 (the sans) for the titles, subtitles and body text.

An example of Reforma in use

Reforma in use. View full booklet. (Large preview)

Space Grotesk

Preview of Space Grotesk

Space Grotesk. (Large preview)

The new foundry of Florian Karsten is behind this versatile, geometric sans serif. Derived from Space Mono, a monospaced typeface designed by Colophon Foundry for Google Fonts in 2016, Space Grotesk kept the nerdy charm of its predecessor and its particular retro-future voice. Available in five weights, Space Grotesk is well-suited for a wide range of uses, from body text to bold headlines. In addition, it comes with five sets of alternate letters, the third one (SS03) removing the connections between the diagonal strokes of uppercase A, M, N, V, W, and lowercase v, w, y  — which can be particularly effective to create distinctive headlines.

Suggested Font Pairings

As one would guess, Space Grotesk pairs well with Colophon Foundry’s Space Mono, the typeface it was derived from, which is also open source and free to use. Alternatively, if you’d like to pair it with a serif typeface, I would recommend one with pointy serifs and sharp details, such as Fortescue, or Wremena, which is also free to use (see below).

Use Case

Little & Big, a web design and development studio based in Sydney, Australia, chose Space Grotesk as the body text font for their website, including on blog entries, header and footer. They decided to pair it with Verona Serial, giving the website a professional, yet playful look and feel.

An example of Space Grotesk in use

Space Grotesk in use. View full website. (Large preview)

An example of Space Grotesk in use

Space Grotesk in use. View full website. (Large preview)

Syne

Preview of Syne

Syne. (Large preview)

Syne is a type family designed by Bonjour Monde for the visual identity of Synesthésie, an art center close to Paris. It consists of five distinct styles, amplifying the notion of structural differentiation within a font family: Syne Extra is a wide, heavy weight intended for use at large sizes, Syne Regular is a geometric sans with short ascenders and descenders (visible in the lowercase ‘g’ among others), complemented with a wider bold style, an italic in a handwritten style and a monospaced with a distorted look. Updated just days ago, Syne now comes with more alternates, a set of brand new accents, and a variable font version.

Suggested Font Pairings

The particularity of this typeface is that you can play with its different styles and create fresh, atypical associations among them. For instance, Syne Extra does wonders for titles and headlines, and works well with Syne Regular for body copy.

Use Case

Syne was recently used by WeTransfer to present the results of their Ideas Report 2018, making extensive use of the typeface’s five different cuts on the website of the report and on its beautiful PDF companion.

An example of Syne in use

Syne in use. View full website. (Large preview)

An example of Syne in use

Syne in use. View full website. (Large preview)

VG5000

Preview of VG5000

VG5000. (Large preview)

Named after the VG 5000, a computer created by Philips in 1984, this typeface playfully combines pixelated and curved strokes, blurring the lines between old and new digital shapes. It also includes many early emojis and pictograms from the VG 5000’s original set, allowing you to create unexpected combinations. Moreover, the typeface comes with contemporary gender-inclusive characters for the French language, replacing the pronouns “il” and “elle” (“he” and “she”) with the neutral “iel”, and providing an alternative to gendered words by combining their masculine and feminine versions.

Suggested Font Pairings

Because of its pixelated details, and to remain true to its origins, I would suggest pairing VG5000 with a monospaced font, for example League Mono, which is also open source. Alternatively, you could decide to pair it with the sans or serif version of Input, or with a semi-proportional typeface like ETC Trispace, which is also free.

Use Case

French design studio Brand Brothers put VG5000 to use for the visual identity of Les Halles de la Cartoucherie, a new, versatile space dedicated to cultural, artistic and gastronomic activities in Toulouse. Paired with a custom, grid-based logo that represents the structure of the space, VG5000 was used both at large and small sizes on print materials, on the website of the space and even on its walls, using VG5000’s pixelated arrows for its wayfinding system.

An example of VG5000 in use

VG5000 in use. View full project. (Large preview)

An example of VG5000 in use

VG5000 in use. View full project. (Large preview)

Wremena

Preview of Wremena

Wremena. (Large preview)

Wremena is a serif typeface designed by Roman Gornitsky and published by Moscow-based foundry Typefaces of The Temporary State. Its design was based on Vremena, a free typeface by the same designer, but it features more pronounced triangular serifs and sharper angles, which become even more visible in heavier weights. Wremena is available in three styles (Light, Regular and Bold) without italics but with support for the Latin and Cyrillic scripts. Because of its similarities with Times New Roman, Wremena can be used as a free, more contemporary alternative to the ever-popular typeface.

Suggested Font Pairings

A good tip to find pairings for a specific font is to browse the library of typefaces created by the same designer, as they often pair well together. This is especially true in this case, as Roman Gornitsky designed two sans serifs that are great matches for Wremena: Nowie Vremena, with its distinctive lowercase ‘g’, and more recently, Steinbeck, a lively font with intentional irregularities.

Use Case

The Jewish Museum of Moscow is behind Russian Spleen, a picturesque interactive web project that explores how Russian landscape painter Isaac Levitan influenced 20th-century cinematography. Designer Pavel Kedich decided to typeset the website in Steinbeck (used at large sizes) and Wremena (used here for captions). As both fonts are multi-script, the website can be offered in both English and Russian.

An example of Wremena in use

Wremena in use. View full website. (Large preview)

An example of Wremena in use

Wremena in use. View full website. (Large preview)

That’s It For Now!

While I’ve included great examples of how these free fonts can be used, there are many more on Typewolf and Fonts in Use. And if you’d like to discover new, high-quality free and open source fonts, make sure to subscribe to my newsletter Fresh Fonts.

All works and screenshots are property of their respective owners.

Smashing Editorial
(ah, yk, il)

Source: Smashing Magazine, 7 Gorgeous Free And Open-Source Typefaces And When To Use Them

The Ultimate Guide To Building Scalable Web Scrapers With Scrapy

dreamt up by webguru in Uncategorized | Comments Off on The Ultimate Guide To Building Scalable Web Scrapers With Scrapy


Daniel Ni

Web scraping is a way to grab data from websites without needing access to APIs or the website’s database. You only need access to the site’s data — as long as your browser can access the data, you will be able to scrape it.

Realistically, most of the time you could just go through a website manually and grab the data ‘by hand’ using copy and paste, but in a lot of cases that would take you many hours of manual work, which could end up costing you a lot more than the data is worth, especially if you’ve hired someone to do the task for you. Why hire someone to work at 1–2 minutes per query when you can get a program to perform a query automatically every few seconds?

For example, let’s say that you wish to compile a list of the Oscar winners for best picture, along with their director, starring actors, release date, and run time. Using Google, you can see there are several sites that will list these movies by name, and maybe some additional information, but generally you’ll have to follow through with links to capture all the information you want.

Obviously, it would be impractical and time-consuming to go through every link from 1927 through to today and manually try to find the information through each page. With web scraping, we just need to find a website with pages that have all this information, and then point our program in the right direction with the right instructions.

In this tutorial, we will use Wikipedia as our website, as it contains all the information we need, and Scrapy, a Python framework, as our scraping tool.

A few caveats before we begin:

Data scraping increases the server load for the site you’re scraping, which means a higher cost for the companies hosting the site and a lower-quality experience for its other users. The quality of the server running the website, the amount of data you’re trying to obtain, and the rate at which you send requests to the server will all moderate the effect you have on the server. Keeping this in mind, we need to make sure that we stick to a few rules.

Most sites also have a file called robots.txt in their main directory. This file sets out rules for which directories the site does not want scrapers to access. A website’s Terms & Conditions page will usually let you know what its policy on data scraping is. For example, IMDB’s conditions page has the following clause:

Robots and Screen Scraping: You may not use data mining, robots, screen scraping, or similar data gathering and extraction tools on this site, except with our express-written consent as noted below.

Before we try to obtain a website’s data, we should always check the website’s terms and robots.txt to make sure we are allowed to obtain the data. When building our scrapers, we also need to make sure that we do not overwhelm a server with requests it can’t handle.

Luckily, many websites recognize the need for users to obtain data, and they make the data available through APIs. If these are available, it’s usually a much easier experience to obtain data through the API than through scraping.

Wikipedia allows data scraping, as long as bots aren’t going ‘way too fast’, as specified in its robots.txt. Wikipedia also provides downloadable datasets so people can process the data on their own machines. If we go too fast, the servers will automatically block our IP, so we’ll implement timers to keep within their rules.

Getting Started, Installing Relevant Libraries Using Pip

First of all, let’s install Scrapy.

For Windows:

Install the latest version of Python from

Note: Windows users will also need Microsoft Visual C++ 14.0, which you can grab from “Microsoft Visual C++ Build Tools” over here.

You’ll also want to make sure you have the latest version of pip.

In cmd.exe, type in:

python -m pip install --upgrade pip

pip install pypiwin32

pip install scrapy

This will install Scrapy and all the dependencies automatically.


Ubuntu

First you’ll want to install all the dependencies:

In Terminal, enter:

sudo apt-get install python3 python3-dev python-pip libxml2-dev libxslt1-dev zlib1g-dev libffi-dev libssl-dev

Once that’s all installed, just type in:

pip install --upgrade pip

To make sure pip is updated, and then:

pip install scrapy

And it’s all done.


macOS

First you’ll need to make sure you have a C compiler on your system. In Terminal, enter:

xcode-select --install

After that, install homebrew from

Update your PATH variable so that homebrew packages are used before system packages:

echo "export PATH=/usr/local/bin:/usr/local/sbin:$PATH" >> ~/.bashrc

source ~/.bashrc

Install Python:

brew install python

And then make sure everything is updated:

brew update; brew upgrade python

After that’s done, just install Scrapy using pip:

pip install Scrapy


Overview Of Scrapy, How The Pieces Fit Together, Parsers, Spiders, Etc

You will be writing a script called a ‘Spider’ for Scrapy to run, but don’t worry, Scrapy spiders aren’t scary at all despite their name. The only similarity Scrapy spiders and real spiders have is that they like to crawl the web.

Inside the spider is a class that you define which tells Scrapy what to do: where to start crawling, the types of requests to make, how to follow links on pages, and how to parse data. You can even add custom functions to process the data before it is output to a file.

Writing Your First Spider, Write A Simple Spider To Allow For Hands-on Learning

To start our first spider, we need to first create a Scrapy project. To do this, enter this into your command line:

scrapy startproject oscars

This will create a folder with your project.

We’ll start with a basic spider. The following code is to be entered into a Python script. Open a new Python script in /oscars/spiders and name it

We’ll import Scrapy.

import scrapy

We then start defining our Spider class. First, we set the name and then the domains that the spider is allowed to scrape. Finally, we tell the spider where to start scraping from.

class OscarsSpider(scrapy.Spider):
   name = "oscars"
   allowed_domains = [""]
   start_urls = ['']

Next, we need a function which will capture the information that we want. For now, we’ll just grab the page title. We use CSS to find the tag which carries the title text, and then we extract it. Finally, we return the information back to Scrapy to be logged or written to a file.

def parse(self, response):
   data = {}
   data['title'] = response.css('title::text').extract()
   yield data

Now save the code in /oscars/spiders/

To run this spider, simply go to your command line and type:

scrapy crawl oscars

You should see an output like this:

2019-05-02 14:39:31 [scrapy.utils.log] INFO: Scrapy 1.6.0 started (bot: oscars)
2019-05-02 14:39:32 [scrapy.core.engine] DEBUG: Crawled (200)  (referer: None)
2019-05-02 14:39:34 [scrapy.core.engine] DEBUG: Crawled (200)  (referer: None)
2019-05-02 14:39:34 [scrapy.core.scraper] DEBUG: Scraped from 
{'title': ['Academy Award for Best Picture - Wikipedia']}
2019-05-02 14:39:34 [scrapy.core.engine] INFO: Closing spider (finished)
2019-05-02 14:39:34 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 589,
 'downloader/request_count': 2,
 'downloader/request_method_count/GET': 2,
 'downloader/response_bytes': 74517,
 'downloader/response_count': 2,
 'downloader/response_status_count/200': 2,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2019, 5, 2, 7, 39, 34, 264319),
 'item_scraped_count': 1,
 'log_count/DEBUG': 3,
 'log_count/INFO': 9,
 'response_received_count': 2,
 'robotstxt/request_count': 1,
 'robotstxt/response_count': 1,
 'robotstxt/response_status_count/200': 1,
 'scheduler/dequeued': 1,
 'scheduler/dequeued/memory': 1,
 'scheduler/enqueued': 1,
 'scheduler/enqueued/memory': 1,
 'start_time': datetime.datetime(2019, 5, 2, 7, 39, 31, 431535)}
2019-05-02 14:39:34 [scrapy.core.engine] INFO: Spider closed (finished)

Congratulations, you’ve built your first basic Scrapy scraper!

Full code:

import scrapy

class OscarsSpider(scrapy.Spider):
   name = "oscars"
   allowed_domains = [""]
   start_urls = [""]

   def parse(self, response):
       data = {}
       data['title'] = response.css('title::text').extract()
       yield data

Obviously, we want it to do a little bit more, so let’s look into how to use Scrapy to parse data.

First, let’s get familiar with the Scrapy shell. The Scrapy shell can help you test your code to make sure that Scrapy is grabbing the data you want.

To access the shell, enter this into your command line:

scrapy shell ""

This will open the page that you’ve directed it to and let you run single lines of code against it. For example, you can view the raw HTML of the page by typing in:

response.text
Or open the page in your default browser by typing in:

view(response)
Our goal here is to find the code that contains the information that we want. For now, let’s try to grab the movie title names only.

The easiest way to find the code we need is by opening the page in our browser and inspecting the code. In this example, I am using Chrome DevTools. Just right-click on any movie title and select ‘inspect’:

Using Chrome DevTools to inspect HTML and CSS

Chrome DevTools window.

As you can see, the Oscar winners have a yellow background while the nominees have a plain background. There’s also a link to the article about the movie title, and the links for movies end in film). Now that we know this, we can use a CSS selector to grab the data. In the Scrapy shell, type in:

response.css(r"tr[style='background:#FAEB86'] a[href*='film)']").extract()

As you can see, you now have a list of all the Oscar Best Picture Winners!

> response.css(r"tr[style='background:#FAEB86'] a[href*='film']").extract()
['<a href="/wiki/Wings_(1927_film)" title="Wings (1927 film)">Wings</a>', 
 '<a href="/wiki/Green_Book_(film)" title="Green Book (film)">Green Book</a>', '<a href="/wiki/Jim_Burke_(film_producer)" title="Jim Burke (film producer)">Jim Burke</a>']

Going back to our main goal, we want a list of the Oscar winners for best picture, along with their director, starring actors, release date, and run time. To do this, we need Scrapy to grab data from each of those movie pages.

We’ll have to rewrite a few things and add a new function, but don’t worry, it’s pretty straightforward.

We’ll start by initiating the scraper the same way as before.

import scrapy, time

class OscarsSpider(scrapy.Spider):
   name = "oscars"
   allowed_domains = [""]
   start_urls = [""]

But this time, two things change. First, we import time along with scrapy because we want to limit how fast the bot scrapes. Second, when we parse the pages the first time, we only want to get a list of the links to each title, so that we can grab information from those pages instead.

def parse(self, response):
   for href in response.css(r"tr[style='background:#FAEB86'] a[href*='film)']::attr(href)").extract():
       url = response.urljoin(href)
       req = scrapy.Request(url, callback=self.parse_titles)
       yield req

Here we make a loop to look for every link on the page that ends in film) and has the yellow background, join each of those relative links onto the site’s base URL, and send the resulting requests to the parse_titles function for further processing. We also want to pace the spider so that it only requests a page every five seconds; Scrapy’s DOWNLOAD_DELAY setting is the usual way to do this. Remember, we can use the Scrapy shell to test our response.css fields to make sure we’re getting the correct data!
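Rather than sleeping inside a callback (which would block Scrapy’s engine), the delay is best configured in the project’s settings.py. A sketch of the relevant lines; the five-second value mirrors the rate discussed above:

```python
# settings.py (sketch): pace requests instead of sleeping in callbacks
DOWNLOAD_DELAY = 5               # wait roughly five seconds between requests
RANDOMIZE_DOWNLOAD_DELAY = True  # vary the delay slightly between requests
ROBOTSTXT_OBEY = True            # respect the site's robots.txt rules
```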

def parse_titles(self, response):
   for sel in response.css('html').extract():
       data = {}
       data['title'] = response.css(r"h1[id='firstHeading'] i::text").extract()
       data['director'] = response.css(r"tr:contains('Directed by') a[href*='/wiki/']::text").extract()
       data['starring'] = response.css(r"tr:contains('Starring') a[href*='/wiki/']::text").extract()
       data['releasedate'] = response.css(r"tr:contains('Release date') li::text").extract()
       data['runtime'] = response.css(r"tr:contains('Running time') td::text").extract()
   yield data

The real work gets done in our parse_titles function, where we create a dictionary called data and then fill each key with the information we want. Again, all these selectors were found using Chrome DevTools as demonstrated before and then tested with the Scrapy shell.

The final line returns the data dictionary back to Scrapy to store.
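Note that extract() returns a list of strings for each field, so a little post-processing is usually needed before the data is analyzed. A sketch of a hypothetical helper — the function name and its rules are my own, not part of the spider above:

```python
def clean_field(values):
    """Join the list of strings that extract() returns into one tidy value."""
    joined = " ".join(v.strip() for v in values)
    return " ".join(joined.split())  # collapse any repeated whitespace

print(clean_field(["The ", " Godfather"]))  # -> The Godfather
print(clean_field(["142 minutes"]))         # -> 142 minutes
```

A helper like this could be called on each value in parse_titles before yielding, or applied later in an item pipeline.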

Complete code:

import scrapy, time

class OscarsSpider(scrapy.Spider):
   name = "oscars"
   allowed_domains = [""]
   start_urls = [""]

   def parse(self, response):
       for href in response.css(r"tr[style='background:#FAEB86'] a[href*='film)']::attr(href)").extract():
           url = response.urljoin(href)
           req = scrapy.Request(url, callback=self.parse_titles)
           yield req

   def parse_titles(self, response):
       for sel in response.css('html').extract():
           data = {}
           data['title'] = response.css(r"h1[id='firstHeading'] i::text").extract()
           data['director'] = response.css(r"tr:contains('Directed by') a[href*='/wiki/']::text").extract()
           data['starring'] = response.css(r"tr:contains('Starring') a[href*='/wiki/']::text").extract()
           data['releasedate'] = response.css(r"tr:contains('Release date') li::text").extract()
           data['runtime'] = response.css(r"tr:contains('Running time') td::text").extract()
       yield data

Sometimes we will want to use proxies as websites will try to block our attempts at scraping.

To do this, we only need to change a few things. Using our example, in our def parse(), we need to change it to the following:

def parse(self, response):
   for href in response.css(r"tr[style='background:#FAEB86'] a[href*='film)']::attr(href)").extract():
       url = response.urljoin(href)
       req = scrapy.Request(url, callback=self.parse_titles)
       req.meta['proxy'] = ""
       yield req

This will route the requests through your proxy server.
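If a single proxy gets blocked, a common pattern is to rotate through a pool of them. A hypothetical sketch — the proxy addresses are placeholders, not real servers:

```python
import random

# Placeholder proxy pool -- substitute your own proxy URLs
PROXIES = [
    "http://proxy1.example.com:8080",
    "http://proxy2.example.com:8080",
    "http://proxy3.example.com:8080",
]

def attach_proxy(meta):
    """Assign a randomly chosen proxy to a request's meta dict."""
    meta["proxy"] = random.choice(PROXIES)
    return meta

meta = attach_proxy({})
print(meta["proxy"] in PROXIES)  # -> True
```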

Deployment And Logging, Show How To Actually Manage A Spider In Production

Now it is time to run our spider. To make Scrapy start scraping and then output to a CSV file, enter the following into your command prompt:

scrapy crawl oscars -o oscars.csv

You will see a large output, and after a couple of minutes, it will complete and you will have a CSV file sitting in your project folder.

Compiling Results, Show How To Use The Results Compiled In The Previous Steps

When you open the CSV file, you will see all the information we wanted, sorted into columns with headings. It’s really that simple.

A CSV of Oscar winning movies and associated information

Oscar winning movies list and information.
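From here, the CSV can be read back with Python’s standard library for further analysis. A sketch using csv.DictReader; the inline sample stands in for the real oscars.csv, and the column names assume the fields our spider yielded:

```python
import csv
import io

# Inline stand-in for open("oscars.csv"), so the sketch is self-contained
sample = io.StringIO(
    "title,director,starring,releasedate,runtime\n"
    "Wings,William A. Wellman,Clara Bow,1927,144 minutes\n"
)

for row in csv.DictReader(sample):
    print(row["title"], "-", row["runtime"])  # -> Wings - 144 minutes
```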

With data scraping, we can obtain almost any custom dataset that we want, as long as the information is publicly available. What you want to do with this data is up to you. This skill is extremely useful for doing market research, keeping information on a website updated, and many other things.

It’s fairly easy to set up your own web scraper to obtain custom datasets. However, always remember that there might be other ways to obtain the data that you need. Businesses invest a lot into providing the data that you want, so it’s only fair that we respect their terms and conditions.

Additional Resources For Learning More About Scrapy And Web Scraping In General

Smashing Editorial
(dm, yk, il)

Source: Smashing Magazine, The Ultimate Guide To Building Scalable Web Scrapers With Scrapy

Collective #532

dreamt up by webguru in Uncategorized | Comments Off on Collective #532



A great project that simulates depth of field with particles on a shader. Also, check out this demo of a blurry cat.

Check it out



BaseDash lets you manage and visualize your databases with a collaborative, cloud-based tool.

Check it out

Collective #532 was written by Pedro Botelho and published on Codrops.

Source: Codrops, Collective #532

Everything You Need To Know About CSS Margins

dreamt up by webguru in Uncategorized | Comments Off on Everything You Need To Know About CSS Margins

Everything You Need To Know About CSS Margins


Rachel Andrew

One of the first things most of us learned when we learned CSS was the detail of the various parts of a box in CSS, described as The CSS Box Model. One of the elements in the Box Model is the margin: a transparent area around a box which will push other elements away from the box contents. The margin-top, margin-right, margin-bottom and margin-left properties were described right back in CSS1, along with the shorthand margin for setting all four properties at once.

A margin seems to be a fairly uncomplicated thing, however, in this article, we will take a look at some of the things which trip people up with regard to using margins. In particular, we will be looking at how margins interact with each other, and how margin collapsing actually works.

The CSS Box Model

As with all articles about parts of the CSS Box Model, we should define what we mean by that, and how the model has been clarified through versions of CSS. The Box Model refers to how the various parts of a box — the content, padding, border, and margin — are laid out and interact with each other. In CSS1, the Box Model was detailed with the ASCII art diagram shown in the image below.

ascii art drawing of the box model

Depiction of the CSS Box Model in CSS1

The four margin properties for each side of the box and the margin shorthand were all defined in CSS1.

The CSS2.1 specification has an illustration to demonstrate the Box Model and also defines terms we still use to describe the various boxes. The specification describes the content box, padding box, border box, and margin box, each being defined by the edges of the content, padding, border, and margin respectively.

diagram of the CSS Box Model

Depiction of the CSS Box Model in CSS2

There is now a Level 3 Box Model specification as a Working Draft. This specification refers back to CSS2 for the definitions of the Box Model and margins, therefore it is the CSS2 definition we will be using for the majority of this article.

Margin Collapsing

The CSS1 specification, as it defined margins, also defined that vertical margins collapse. This collapsing behavior has been the source of margin-related frustration ever since. Margin collapsing makes sense if you consider that in those early days, CSS was being used as a document formatting language. Margin collapsing means that when a heading with a bottom margin is followed by a paragraph with a top margin, you do not get a huge gap between those items.

When margins collapse, they will combine so that the space between the two elements becomes the larger of the two margins; the smaller margin essentially ends up inside the larger one.

Margins collapse in the following situations:

  • Adjacent siblings
  • Completely empty boxes
  • A parent and its first or last child element

Let’s take a look at each of these scenarios in turn, before looking at the things which prevent margins from collapsing in these scenarios.

Adjacent Siblings

My initial description of margin collapsing is a demonstration of how the margins between adjacent siblings collapse. Other than in the situations mentioned below, if you have two elements displaying one after the other in normal flow, the bottom margin of the first element will collapse with the top margin of the following element.

In the CodePen example below, there are three div elements. The first has a top and bottom margin of 50 pixels. The second has a top and bottom margin of 20px. The third has a top and bottom margin of 3em. The margin between the first two elements is 50 pixels, as the smaller top margin is combined with the larger bottom margin. The margin between the second two elements is 3em, as 3em is larger than the 20 pixels on the bottom of the second element.

See the Pen [Margins: adjacent siblings]( by Rachel Andrew.

See the Pen Margins: adjacent siblings by Rachel Andrew.
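The relevant rules from that example can be sketched like this (the class names are mine):

```css
.first  { margin: 50px 0; } /* 50px top and bottom */
.second { margin: 20px 0; }
.third  { margin: 3em 0; }

/* Collapsed gaps in normal flow:
   .first  / .second : 50px (larger of 50px and 20px)
   .second / .third  : 3em  (larger of 20px and 3em) */
```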

Completely Empty Boxes

If a box is empty, then its top and bottom margin may collapse with each other. In the following CodePen example, the element with a class of empty has a top and bottom margin of 50 pixels, however, the space between the first and third items is not 100 pixels, but 50. This is due to the two margins collapsing. Adding anything to that box (even padding) will cause the top and bottom margins to be used and not collapse.

See the Pen [Margins: empty boxes]( by Rachel Andrew.

See the Pen Margins: empty boxes by Rachel Andrew.

Parent And First Or Last Child Element

This is the margin collapsing scenario which catches people out most often, as it does not seem particularly intuitive. In the following CodePen, I have a div with a class of wrapper, and I have given that div an outline in red so that you can see where it is. The three child elements all have a margin of 50 pixels. However, the first and last items are flush with the edges of the wrapper; there is not a 50-pixel margin between the element and the wrapper.

See the Pen [Margins: margin on first and last child]( by Rachel Andrew.

See the Pen Margins: margin on first and last child by Rachel Andrew.

This is because the margin on the child collapses with any margin on the parent thus ending up on the outside of the parent. You can see this if you inspect the first child using DevTools. The highlighted yellow area is the margin.

The item with a yellow highlighted margin showing outside the parent

DevTools can help you see where your margin ends up

Only Block Margins Collapse

The last example also highlights something about margin collapsing. In CSS2, only vertical margins are specified to collapse — that is the top and bottom margins on an element if you are in a horizontal writing mode. So the left and right margins above are not collapsing and ending up outside the wrapper.

Note: It is worth remembering that margins only collapse in the block direction, such as between paragraphs.

Things Which Prevent Margin Collapsing

Margins never collapse if an item has absolute positioning, or is floated. However, assuming you have run into one of the places where margins collapse outlined above, how can you stop those margins collapsing?

The first thing that stops collapsing is situations where there is something between the elements in question.

For example, a box completely empty of content will not collapse its top and bottom margin if it has a border or padding applied. In the example below, I have added 1px of padding to the box. There is now a 50-pixel margin above and below the box.

See the Pen [Margins: empty boxes with padding do not collapse]( by Rachel Andrew.

See the Pen Margins: empty boxes with padding do not collapse by Rachel Andrew.

There is logic behind this: if the box is completely empty with no border or padding, it is essentially invisible. It might be an empty paragraph element thrown into the markup by your CMS. If your CMS was adding redundant paragraph elements, you probably wouldn’t want them to cause large gaps between the other paragraphs due to their margins being honored. Add anything to the box, and you will get those gaps.

Similar behavior can be seen with margins on first or last children which collapse through the parent. If we add a border to the parent, the margins on the children stay inside.

See the Pen [Margins: margin on first and last child doesn’t collapse if the parent has a border]( by Rachel Andrew.

See the Pen Margins: margin on first and last child doesn’t collapse if the parent has a border by Rachel Andrew.

Once again, there is some logic to the behavior. If you have wrapping elements for semantic purposes that do not display visually, you probably don’t want them to introduce big gaps in the display. This made a lot of sense when the web was mostly text. It is less useful as behavior when we are using elements to lay out a design.

Creating a Block Formatting Context

A new Block Formatting Context (BFC) will also prevent margin collapsing through the containing element. If we look again at the example of the first and last child, ending up with their margins outside of the wrapper, and give the wrapper display: flow-root, thus creating a new BFC, the margins stay inside.

See the Pen [Margins: a new Block Formatting Context contains margins]( by Rachel Andrew.

See the Pen Margins: a new Block Formatting Context contains margins by Rachel Andrew.
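The fix from that example amounts to a single declaration on the containing element:

```css
.wrapper {
  display: flow-root; /* creates a new Block Formatting Context,
                         so child margins stay inside */
}
```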

To find out more about display: flow-root, read my article “Understanding CSS Layout And The Block Formatting Context”. Changing the value of the overflow property to auto will have the same effect, as this also creates a new BFC, although it may also create scrollbars that you didn’t want in some scenarios.

Flex And Grid Containers

Flex and Grid containers establish Flex and Grid formatting contexts for their children, so they have different behavior to block layout. One of those differences is that margins do not collapse:

“A flex container establishes a new flex formatting context for its contents. This is the same as establishing a block formatting context, except that flex layout is used instead of block layout. For example, floats do not intrude into the flex container, and the flex container’s margins do not collapse with the margins of its contents.”

Flexbox Level 1

If we take the example above and make the wrapper into a flex container, displaying the items with flex-direction: column, you can see that the margins are now contained by the wrapper. Additionally, margins between adjacent flex items do not collapse with each other, so we end up with 100 pixels between flex items, the total of the 50 pixels on the top and bottom of the items.

See the Pen [Margins: margins on flex items do not collapse]( by Rachel Andrew.

See the Pen Margins: margins on flex items do not collapse by Rachel Andrew.

Margin Strategies For Your Site

Due to margin collapsing, it is a good idea to come up with a consistent way of dealing with margins in your site. The simplest thing to do is to only define margins on the top or bottom of elements. In that way, you should not run into margin collapsing issues too often as the side with a margin will always be adjacent to a side without a margin.

Note: Harry Roberts has an excellent post detailing the reasons why setting margins only in one direction is a good idea, and not just due to solving collapsing margin issues.

This solution doesn’t solve the issues you might run into with margins on children collapsing through their parent. That particular issue tends to be less common, and knowing why it is happening can help you come up with a solution. An ideal solution to that is to give components which require it display: flow-root, as a fallback for older browsers you could use overflow to create a BFC, turn the parent into a flex container, or even introduce a single pixel of padding. Don’t forget that you can use feature queries to detect support for display: flow-root so only old browsers get a less optimal fix.
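Such a feature query might look like this (the component class name is mine):

```css
.component {
  overflow: auto; /* fallback: creates a BFC, but may show scrollbars */
}

@supports (display: flow-root) {
  .component {
    overflow: visible;   /* undo the fallback */
    display: flow-root;  /* the modern, side-effect-free BFC */
  }
}
```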

Most of the time, I find that knowing why margins collapse (or didn’t) is the key thing. You can then figure out on a case-by-case basis how to deal with it. Whatever you choose, make sure to share that information with your team. Quite often margin collapsing is a bit mysterious, so the reason for doing things to counter it may be non-obvious! A comment in your code goes a long way to help — you could even link to this article and help to share the margin collapsing knowledge.

I thought that I would round up this article with a few other margin-related pieces of information.

Percentage Margins

When you use a percentage in CSS, it has to be a percentage of something. Margins (and padding) set using percentages will always be a percentage of the inline size (width in a horizontal writing mode) of the parent. This means that you will have equal-sized margins all the way around the element when using percentages.

In the CodePen example below, I have a wrapper which is 200 pixels wide; inside is a box which has a 10% margin. The margin is 20 pixels on all sides, that being 10% of 200.

See the Pen [Margins: percentage margins]( by Rachel Andrew.

See the Pen Margins: percentage margins by Rachel Andrew.

Margins In A Flow-Relative World

We have been talking about vertical margins throughout this article, however, modern CSS tends to think about things in a flow relative rather than a physical way. Therefore, when we talk about vertical margins, we really are talking about margins in the block dimension. Those margins will be top and bottom if we are in a horizontal writing mode, but would be right and left in a vertical writing mode written left to right.

Once working with logical, flow relative directions it becomes easier to talk about block start and block end, rather than top and bottom. To make this easier, CSS has introduced the Logical Properties and Values specification. This maps flow relative properties onto the physical ones.

For margins, this gives us the following mappings (if we are working in English or any other horizontal writing mode with a left-to-right text direction).

  • margin-top = margin-block-start
  • margin-right = margin-inline-end
  • margin-bottom = margin-block-end
  • margin-left = margin-inline-start

We also have two new shorthands which allow for setting both block margins at once, or both inline margins.

  • margin-block
  • margin-inline
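In a horizontal, left-to-right writing mode, the mappings and shorthands above work out like this:

```css
.box {
  margin-block: 50px 20px; /* block-start 50px, block-end 20px
                              (top and bottom in horizontal-tb) */
  margin-inline: 1em;      /* both inline sides (left and right) 1em */
}
```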

In the next CodePen example, I have used these flow relative keywords and then changed the writing mode of the box. You can see how the margins follow the text direction rather than being tied to the physical top, right, bottom, and left.

See the Pen [Margins: flow relative margins]( by Rachel Andrew.

See the Pen Margins: flow relative margins by Rachel Andrew.

You can read more about logical properties and values on MDN or in my article “Understanding Logical Properties And Values” here on Smashing Magazine.

To Wrap-Up

You now know most of what there is to know about margins! In short:

  • Margin collapsing is a thing. Understanding why it happens and when it doesn’t will help you solve any problems it may cause.
  • Setting margins in one direction only solves many margin-related headaches.
  • As with anything in CSS, share with your team the decisions you make, and comment your code.
  • Thinking about block and inline dimensions rather than the physical top, right, bottom and left will help you as the web moves towards being writing mode agnostic.

Source: Smashing Magazine, Everything You Need To Know About CSS Margins