Collective #542

Inspirational Website of the Week: Igor Mahr

A web design full of lively motion with a truly unique feel. Our pick this week.

Get inspired

Faster Image Loading With Embedded Image Previews


Christoph Erdmann



Low Quality Image Preview (LQIP) and the SVG-based variant SQIP are the two predominant techniques for lazy image loading. What both have in common is that you first generate a low-quality preview image. This will be displayed blurred and later replaced by the original image. What if you could present a preview image to the website visitor without having to load additional data?

JPEG files, for which lazy loading is mostly used, can, according to the specification, store their data in such a way that first the coarse and then the detailed image content is displayed. Instead of having the image build up from top to bottom during loading (baseline mode), a blurred image can be displayed very quickly, which then gradually becomes sharper (progressive mode).

Representation of the temporal structure of a JPEG in baseline mode

Baseline mode (Large preview)

Representation of the temporal structure of a JPEG in progressive mode

Progressive mode (Large preview)

In addition to the better user experience provided by the more quickly rendered first impression, progressive JPEGs are usually also smaller than their baseline-encoded counterparts. According to Stoyan Stefanov of the Yahoo development team, for files larger than 10 kB there is a 94 percent probability that the progressive version will be smaller.

If your website consists of many JPEGs, you will notice that even progressive JPEGs load one after the other. This is because modern browsers only allow six simultaneous connections to a domain. Progressive JPEGs alone are therefore not the solution to give the user the fastest possible impression of the page. In the worst case, the browser will load an image completely before it starts loading the next one.

The idea presented here is to load from the server only as many bytes of a progressive JPEG as are needed to quickly get an impression of the image content. Later, at a time we define (e.g. when all preview images in the current viewport have been loaded), the rest of the image is loaded, without re-requesting the part already fetched for the preview.

Shows the way the EIP (Embedded image preview) technique loads the image data in two requests.

Loading a progressive JPEG with two requests (Large preview)

Unfortunately, you can’t tell an img tag in an attribute how much of the image should be loaded at what time. With Ajax, however, this is possible, provided that the server delivering the image supports HTTP Range Requests.

Using HTTP range requests, a client can tell the server in an HTTP request header which bytes of the requested file the HTTP response should contain. This feature, supported by all major servers (Apache, IIS, nginx), is mainly used for video playback: if a user jumps to the end of a video, it would be very inefficient to load the complete video before the user can finally see the desired part. Instead, only the video data around the requested playback position is fetched from the server, so that the user can start watching as quickly as possible.
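The byte-range semantics are simple: `Range: bytes=0-1023` requests the first 1024 bytes, while `bytes=1024-` requests everything from byte 1024 onward. As a small sketch of these semantics (parseRange is a hypothetical helper, not a standard API):

```javascript
// Sketch of the byte-range semantics used by HTTP Range Requests.
// parseRange is a hypothetical helper for illustration only.
function parseRange(header) {
  const match = /^bytes=(\d+)-(\d*)$/.exec(header);
  if (!match) return null;
  return {
    start: Number(match[1]),
    // An empty end means "up to and including the last byte of the file".
    end: match[2] === "" ? null : Number(match[2]),
  };
}

// "bytes=0-1023" asks the server for the first 1024 bytes only.
console.log(parseRange("bytes=0-1023")); // { start: 0, end: 1023 }
// "bytes=1024-" asks for everything from byte 1024 to the end.
console.log(parseRange("bytes=1024-")); // { start: 1024, end: null }
```

Both forms will reappear below: the first request loads the preview with a closed range, the second request fetches the remainder with an open-ended one.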

We now face the following three challenges:

  1. Creating The Progressive JPEG
  2. Determining The Byte Offset Up To Which The First HTTP Range Request Must Load The Preview Image
  3. Creating The Frontend JavaScript Code

1. Creating The Progressive JPEG

A progressive JPEG consists of several so-called scan segments, each of which contains a part of the final image. The first scan shows the image only very roughly, while the ones that follow later in the file add more and more detailed information to the already loaded data and finally form the final appearance.

How exactly the individual scans look is determined by the program that generates the JPEGs. In command-line programs like cjpeg from the mozjpeg project, you can even define which data these scans contain. However, this requires more in-depth knowledge, which would go beyond the scope of this article. For this, I would like to refer to my article “Finally Understanding JPG“, which teaches the basics of JPEG compression. The exact parameters that have to be passed to the program in a scan script are explained in the wizard.txt of the mozjpeg project. In my opinion, the parameters of the scan script (seven scans) used by mozjpeg by default are a good compromise between fast progressive structure and file size and can, therefore, be adopted.

To transform our initial JPEG into a progressive JPEG, we use jpegtran from the mozjpeg project. This is a tool to make lossless changes to an existing JPEG. Pre-compiled builds for Windows and Linux are available here: https://mozjpeg.codelove.de/binaries.html. If you prefer to play it safe for security reasons, it’s better to build them yourself.

From the command line we now create our progressive JPEG:

$ jpegtran input.jpg > progressive.jpg

The fact that we want to build a progressive JPEG is assumed by jpegtran and does not need to be explicitly specified. The image data will not be changed in any way. Only the arrangement of the image data within the file is changed.

Metadata irrelevant to the appearance of the image (such as Exif, IPTC, or XMP data) should ideally be removed from the JPEG, since the corresponding segments can only be read by metadata decoders if they precede the image content. Because we cannot move them behind the image data in the file, they would be delivered with the preview image and enlarge the first request accordingly. With the command-line program exiftool you can easily remove this metadata:

$ exiftool -all= progressive.jpg

If you don’t want to use a command-line tool, you can also use the online compression service compress-or-die.com to generate a progressive JPEG without metadata.

2. Determining The Byte Offset Up To Which The First HTTP Range Request Must Load The Preview Image

A JPEG file is divided into different segments, each containing different components (image data, metadata such as IPTC, Exif and XMP, embedded color profiles, quantization tables, etc.). Each of these segments begins with a marker introduced by a hexadecimal FF byte. This is followed by a byte indicating the type of segment. For example, D8 completes the marker to the SOI marker FF D8 (Start Of Image), with which each JPEG file begins.

Each start of a scan is marked by the SOS marker (Start Of Scan, hexadecimal FF DA). Since the data behind the SOS marker is entropy coded (JPEGs use the Huffman coding), there is another segment with the Huffman tables (DHT, hexadecimal FF C4) required for decoding before the SOS segment. The area of interest for us within a progressive JPEG file, therefore, consists of alternating Huffman tables/scan data segments. Thus, if we want to display the first very rough scan of an image, we have to request all bytes up to the second occurrence of a DHT segment (hexadecimal FF C4) from the server.

Shows the SOS markers in a JPEG file

Structure of a JPEG file (Large preview)

In PHP, we can use the following code to read the number of bytes required for all scans into an array:

<?php
$img = "progressive.jpg";
$jpgdata = file_get_contents($img);
$positions = [];
$offset = 0;
// Find every DHT marker (hex FF C4) in the file
while ($pos = strpos($jpgdata, "\xFF\xC4", $offset)) {
    $positions[] = $pos + 2;
    $offset = $pos + 2;
}

We have to add the value of two to the found position because the browser only renders the last row of the preview image when it encounters a new marker (which consists of two bytes as just mentioned).

Since we are interested in the first preview image in this example, we find the correct position in $positions[1] up to which we have to request the file via HTTP Range Request. To request an image with a better resolution, we could use a later position in the array, e.g. $positions[3].
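For build pipelines running on Node.js instead of PHP, the same marker scan can be sketched in JavaScript (findScanPositions is a hypothetical helper operating on a Uint8Array of the file's bytes):

```javascript
// Sketch: find the DHT markers (FF C4) in a JPEG byte buffer,
// mirroring the PHP snippet above. Hypothetical helper for illustration.
function findScanPositions(bytes) {
  const positions = [];
  for (let i = 0; i < bytes.length - 1; i++) {
    if (bytes[i] === 0xFF && bytes[i + 1] === 0xC4) {
      // Add two so the range includes the complete two-byte marker.
      positions.push(i + 2);
    }
  }
  return positions;
}

// Toy buffer with two FF C4 markers, at offsets 3 and 8:
const bytes = Uint8Array.from([
  0xFF, 0xD8, 0x00, 0xFF, 0xC4, 0x01, 0x02, 0x00, 0xFF, 0xC4, 0x03,
]);
console.log(findScanPositions(bytes)); // [ 5, 10 ]
```

In a real pipeline you would read the file with `fs.readFileSync` and persist the resulting offsets alongside the image.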

3. Creating The Frontend JavaScript Code

First of all, we define an img tag, to which we give the just evaluated byte position:

<img data-src="progressive.jpg" data-bytes="<?= $positions[1] ?>">

As is often the case with lazy load libraries, we do not define the src attribute directly so that the browser does not immediately start requesting the image from the server when parsing the HTML code.

With the following JavaScript code we now load the preview image:

var $img = document.querySelector("img[data-src]");
var URL = window.URL || window.webkitURL;

var xhr = new XMLHttpRequest();
xhr.onload = function(){
    if (this.status === 206){
        $img.src_part = this.response;
        $img.src = URL.createObjectURL(this.response);
    }
}

xhr.open('GET', $img.getAttribute('data-src'));
xhr.setRequestHeader("Range", "bytes=0-" + $img.getAttribute('data-bytes'));
xhr.responseType = 'blob';
xhr.send();

This code creates an Ajax request telling the server, via an HTTP Range header, to return the file from the beginning up to the position specified in data-bytes, and no more. If the server understands HTTP Range Requests, it returns the binary image data in an HTTP 206 response (HTTP 206 = Partial Content) in the form of a blob, from which we can generate a browser-internal URL using createObjectURL. We use this URL as the src for our img tag. Thus we have loaded our preview image.

We store the blob additionally at the DOM object in the property src_part, because we will need this data immediately.

In the network tab of the developer console you can check that we have not loaded the complete image, but only a small part. In addition, the loading of the blob URL should be displayed with a size of 0 bytes.

Shows the network console and the sizes of the HTTP requests

Network console when loading the preview image (Large preview)

Since we already load the JPEG header of the original file, the preview image has the correct size. Thus, depending on the application, we can omit the height and width of the img tag.

Alternative: Loading the preview image inline

For performance reasons, it is also possible to transfer the data of the preview image as a data URI directly in the HTML source code. This saves us the overhead of transferring additional HTTP headers, but the base64 encoding makes the image data one third larger. This is mitigated if you deliver the HTML code with a content encoding like gzip or brotli, but data URIs should still be reserved for small preview images.
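The one-third figure follows directly from how base64 works: every 3 bytes of input become 4 output characters, rounded up to a multiple of 4. A quick sketch (base64Length is a hypothetical helper):

```javascript
// Sketch: the size of a base64 encoding of n bytes is 4 * ceil(n / 3),
// which is roughly a 33% overhead. Hypothetical helper for illustration.
function base64Length(byteCount) {
  return 4 * Math.ceil(byteCount / 3);
}

// A 3000-byte preview grows to 4000 characters, before the
// "data:image/jpeg;base64," prefix is even added.
console.log(base64Length(3000)); // 4000
```

This is why the technique only pays off for small previews: for a few kilobytes the absolute overhead stays small, while a full-size image would waste a third of its transfer size.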

Much more important is the fact that the preview images are available immediately and there is no noticeable delay for the user when building the page.

First of all, we have to create the data URI, which we then use in the img tag as src. For this, we create the data URI via PHP, whereby this code is based on the code just created, which determines the byte offsets of the SOS markers:

<?php
…

$fp = fopen($img, 'r');
$data_uri = 'data:image/jpeg;base64,'. base64_encode(fread($fp, $positions[1]));
fclose($fp);

The created data URI is now inserted directly into the img tag as src:

<img src="<?= $data_uri ?>" data-src="progressive.jpg" alt="">

Of course, the JavaScript code must also be adapted:


var $img = document.querySelector("img[data-src]");

var binary = atob($img.src.slice(23));
var n = binary.length;
var view = new Uint8Array(n);
while(n--) { view[n] = binary.charCodeAt(n); }

$img.src_part = new Blob([view], { type: 'image/jpeg' });
$img.setAttribute('data-bytes', $img.src_part.size - 1);

Instead of requesting the data via an Ajax request, where we would immediately receive a blob, in this case we have to create the blob ourselves from the data URI. To do this, we strip from the data URI the part that does not contain image data: data:image/jpeg;base64. We decode the remaining base64-encoded data with the atob function. To create a blob from the now binary string data, we transfer the data into a Uint8Array, which ensures that the data is not treated as UTF-8 encoded text. From this array, we can now create a binary blob with the image data of the preview image.

So that we don’t have to adapt the following code for this inline version, we add the attribute data-bytes on the img tag, which in the previous example contains the byte offset from which the second part of the image has to be loaded.

In the network tab of the developer console, you can also check here that loading the preview image does not generate an additional request, while the file size of the HTML page has increased.

Shows the network console and the sizes of the HTTP requests

Network console when loading the preview image as data URI (Large preview)

Loading the final image

In a second step we load the rest of the image file after two seconds as an example:

setTimeout(function(){
    var xhr = new XMLHttpRequest();
    xhr.onload = function(){
        if (this.status === 206){
            var blob = new Blob([$img.src_part, this.response], { type: 'image/jpeg'} );
            $img.src = URL.createObjectURL(blob);
        }
    }
    xhr.open('GET', $img.getAttribute('data-src'));
    xhr.setRequestHeader("Range", "bytes="+ (parseInt($img.getAttribute('data-bytes'), 10)+1) +'-');
    xhr.responseType = 'blob';
    xhr.send();
}, 2000);

In the Range header, this time we specify that we want to request the image from the end position of the preview image to the end of the file. The response to the first request is stored in the property src_part of the DOM object. We use the responses from both requests to create a new blob via new Blob(), which contains the data of the whole image. The blob URL generated from this is used again as the src of the DOM object. Now the image is completely loaded.
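The reassembly step can be illustrated without browser APIs: joining the two requested byte ranges must reproduce the original file byte for byte, which is what the Blob constructor does internally. A sketch with plain typed arrays (concatRanges is a hypothetical helper):

```javascript
// Sketch: reassembling the full file from the two byte ranges.
// In the browser, new Blob([part1, part2]) performs this concatenation;
// here it is shown explicitly. Hypothetical helper for illustration.
function concatRanges(part1, part2) {
  const whole = new Uint8Array(part1.length + part2.length);
  whole.set(part1, 0);
  whole.set(part2, part1.length);
  return whole;
}

// Split a toy "file" exactly as the two range requests would:
const file = Uint8Array.from([1, 2, 3, 4, 5, 6, 7, 8]);
const offset = 3; // last byte index of the preview, as in data-bytes
const preview = file.slice(0, offset + 1); // first request: bytes=0-3
const rest = file.slice(offset + 1);       // second request: bytes=4-
console.log(concatRanges(preview, rest));  // identical to file
```

Because the second range starts exactly one byte after the first one ends, no byte is transferred twice and none is missing.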

We can now check the loaded sizes in the network tab of the developer console again.

Shows the network console and the sizes of the HTTP requests

Network console when loading the entire image (31.7 kB) (Large preview)

Prototype

At the following URL I have provided a prototype where you can experiment with different parameters: http://embedded-image-preview.cerdmann.com/prototype/

The GitHub repository for the prototype can be found here: https://github.com/McSodbrenner/embedded-image-preview

Considerations At The End

Using the Embedded Image Preview (EIP) technique presented here, we can load preview images of varying quality from progressive JPEGs with the help of Ajax and HTTP Range Requests. The data from these preview images is not discarded but instead reused to display the entire image.

Furthermore, no separate preview images need to be created. On the server side, only the byte offset at which the preview image ends has to be determined and saved. In a CMS, it should be possible to store this number as an attribute on an image and take it into account when outputting the img tag. A workflow that appends the offset to the image's file name, e.g. progressive-8343.jpg, would also be conceivable, so that the offset does not have to be stored separately from the image file; the JavaScript code could then extract it from the file name.

Since the preview image data is reused, this technique could be a better alternative to the usual approach of loading a preview image and then a WebP (with a JPEG fallback for browsers that do not support WebP). The extra preview image often negates the size advantage of WebP, which does not support a progressive mode.

Currently, preview images in normal LQIP are of inferior quality, since it is assumed that loading the preview data requires additional bandwidth. As Robin Osborne made clear in a 2018 blog post, it doesn’t make much sense to show placeholders that give no idea of the final image. With the technique suggested here, we can show more of the final image as a preview without hesitation, simply by presenting the user a later scan of the progressive JPEG.

If the user has a weak network connection, it might make sense, depending on the application, not to load the whole JPEG but, for example, to omit the last two scans. This produces a much smaller JPEG with only slightly reduced quality. The user will thank us for it, and we don’t have to store an additional file on the server.
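One way to sketch such an adaptive choice, assuming the byte offsets of all scans are available on the client: the effectiveType values below mirror the experimental Network Information API, and the mapping of connection types to scan indices is purely illustrative.

```javascript
// Sketch: choose how much of the progressive JPEG to request, based on
// connection quality. chooseEndOffset is a hypothetical helper; the
// quality-to-scan mapping is an illustrative assumption.
function chooseEndOffset(positions, fileSize, effectiveType) {
  switch (effectiveType) {
    case "slow-2g":
    case "2g":
      return positions[3]; // stop after an early scan
    case "3g":
      return positions[positions.length - 2]; // omit the last scans
    default:
      return fileSize; // load the complete file
  }
}

// Example offsets for the seven scans of a mozjpeg-encoded image
// (made-up numbers for illustration):
const positions = [210, 1800, 4200, 7900, 15300, 24100, 30800];
console.log(chooseEndOffset(positions, 32000, "2g")); // 7900
console.log(chooseEndOffset(positions, 32000, "4g")); // 32000
```

In a browser, the effectiveType argument could come from navigator.connection.effectiveType where that API is available, falling back to loading the full image otherwise.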

Now I wish you a lot of fun trying out the prototype and look forward to your comments.

Smashing Editorial
(dm, yk, il)

Source: Smashing Magazine, Faster Image Loading With Embedded Image Previews

Collective #542

The updated portfolio by Aristide Benoist is a true WebGL powered gem.

Check it out



NuxtPress

NuxtPress is a microframework that will automatically register routes, templates and configuration options in your Nuxt application. It also takes care of loading Markdown files and making pre-rendered static files automatically available.

Check it out








useAuth

UseAuth offers a simple way to add authentication to your React app. It handles users, login forms and redirects.

Check it out




line-clamp

This CSS-Tricks Almanac entry by Geoff Graham is about line-clamp, a very practical property that truncates text at a specific number of lines.

Read it



GoPablo

GoPablo is a static site generator with a modern development workflow, integrated web server, auto-reload, CSS preprocessors, and ES6 ready.

Check it out








Collective #542 was written by Pedro Botelho and published on Codrops.


Source: Codrops, Collective #542

Testing Made Easier Via Framework Minimalism And Software Architecture


Ryan Kay



Like many other Android developers, my initial foray into testing on the platform led me to be immediately confronted with a demoralizing degree of jargon. Furthermore, the few examples I came across at the time (circa 2015) did not present practical use cases that might have convinced me that the cost-to-benefit ratio of learning a tool like Espresso, just to verify that a TextView.setText(…) call was working properly, was a reasonable investment.

To make matters even worse, I did not have a working understanding of software architecture in theory or practice, which meant that even if I bothered to learn these frameworks, I would have been writing tests for monolithic applications composed of a few god classes, written in spaghetti code. The punchline is that building, testing, and maintaining such applications is an exercise in self-sabotage, quite regardless of your framework expertise; yet this realization only becomes clear after one has built a modular, loosely coupled, and highly cohesive application.

From here we arrive at one of the main points of discussion in this article, which I will summarize in plain language here: Among the primary benefits of applying the golden principles of software architecture (do not worry, I will discuss them with simple examples and language), is that your code can become easier to test. There are other benefits to applying such principles, but the relationship between software architecture and testing is the focus of this article.

However, for the sake of those who wish to understand why and how we test our code, we will first explore the concept of testing by analogy; without requiring you to memorize any jargon. Before getting deeper into the primary topic, we will also look at the question of why so many testing frameworks exist, for in examining this we may begin to see their benefits, limitations, and perhaps even an alternative solution.

Testing: Why And How

This section will not be new information for any seasoned tester, but perhaps you may enjoy this analogy nonetheless. Of course I am a software engineer, not a rocket engineer, but for a moment I will borrow an analogy which relates to designing and building objects both in physical space, and in the memory space of a computer. It turns out that while the medium changes, the process is in principle quite the same.

Suppose for a moment that we are rocket engineers, and our job is to build the first stage* rocket booster of a space shuttle. Suppose as well, that we have come up with a serviceable design for the first stage to begin building and testing in various conditions.

“First stage” refers to boosters which are fired when the rocket is first launched

Before we get to the process, I would like to point out why I prefer this analogy: You should not have any difficulty answering the question of why we are bothering to test our design before putting it in situations where human lives are at stake. While I will not try to convince you that testing your applications before launch could save lives (although it is possible depending on the nature of the application), it could save ratings, reviews, and your job. In the broadest sense, testing is the way in which we make sure that single parts, several components, and whole systems work before we employ them in situations where it is critically important for them to not fail.

Returning to the how aspect of this analogy, I will introduce the process by which engineers go about testing a particular design: redundancy. Redundancy is simple in principle: Build copies of the component to be tested to the same design specification as what you wish to use at launch time. Test these copies in an isolated environment which strictly controls for preconditions and variables. While this does not guarantee that the rocket booster will work properly when integrated in the whole shuttle, one can be certain that if it does not work in a controlled environment, it will be very unlikely to work at all.

Suppose that of the hundreds, or perhaps thousands, of variables against which copies of the rocket design have been tested, it comes down to the ambient temperatures at which the rocket booster will be test fired. Upon testing at 35° Celsius, we see that everything functions without error. Again, the rocket is tested at roughly room temperature without failure. The final test will be at the lowest recorded temperature for the launch site, at -5° Celsius. During this final test, the rocket fires, but after a short period it flares up and shortly thereafter explodes violently, fortunately in a controlled and safe environment.

At this point, we know that changes in temperature appear to be at least involved in the failed test, which leads us to consider what parts of the rocket booster may be adversely affected by cold temperatures. Over time, it is discovered that one key component, a rubber O-ring which serves to staunch the flow of fuel from one compartment to another, becomes rigid and ineffectual when exposed to temperatures approaching or below freezing.

It is possible that you have noticed that this analogy is loosely based on the tragic events of the Challenger space shuttle disaster. For those unfamiliar, the sad truth (insofar as the investigations concluded) is that there were plenty of failed tests and warnings from the engineers, and yet administrative and political concerns spurred the launch to proceed regardless. In any case, whether or not you have memorized the term redundancy, my hope is that you have grasped the fundamental process for testing parts of any kind of system.

Concerning Software

Whereas the prior analogy explained the fundamental process for testing rockets (while taking plenty of liberty with the finer details), I will now summarize in a manner likely more relevant to you and me. While it is possible to test software only by launching it to devices once it is in some deployable state, I propose instead that we apply the principle of redundancy to the individual parts of the application first.

This means that we create copies of the smaller parts of the whole application (commonly referred to as Units of software), set up an isolated test environment, and see how they behave based on whatever variables, arguments, events, and responses which may occur at runtime. Testing is truly as simple as that in theory, but the key to even getting to this process lies in building applications which are feasibly testable. This comes down to two concerns which we will look at in the next two sections. The first concern has to do with the test environment, and the second concern has to do with the way in which we structure applications.

Why Do We Need Frameworks?

In order to test a piece of software (henceforth referred to as a Unit, although this definition is deliberately an over-simplification), it is necessary to have some kind of testing environment which allows you to interact with your software at runtime. For those building applications to be executed purely on a JVM (Java Virtual Machine) environment, all that is required to write tests is a JRE (Java Runtime Environment). Take for example this very simple Calculator class:

class Calculator {
    // These methods must be accessible from the test code below,
    // so they cannot be private.
    public int add(int a, int b){
        return a + b;
    }

    public int subtract(int a, int b){
        return a - b;
    }
}

In the absence of any frameworks, as long as we have a test class which contains a main function to actually execute our code, we can test it. As you may recall, the main function denotes the starting point of execution for a simple Java program. As for what we are testing, we simply feed some test data into the Calculator’s functions and verify that it performs basic arithmetic properly:

public class Main {

    public static void main(String[] args){
    //create a copy of the Unit to be tested
        Calculator calc = new Calculator();
    //create test conditions to verify behaviour
        int addTest = calc.add(2, 2);
        int subtractTest = calc.subtract(2, 2);

    //verify behaviour by assertion
        if (addTest == 4) System.out.println("addTest has passed.");
        else System.out.println("addTest has failed.");

        if (subtractTest == 0) System.out.println("subtractTest has passed.");
        else System.out.println("subtractTest has failed.");
    }
}

Testing an Android application is of course, a completely different procedure. Although there is a main function buried deep within the source of the ZygoteInit.java file (the finer details of which are not important here), which is invoked prior to an Android application being launched on the JVM, even a junior Android developer ought to know that the system itself is responsible for calling this function; not the developer. Instead, the entry points for Android applications happen to be the Application class, and any Activity classes which the system can be pointed to via the AndroidManifest.xml file.

All of this is just a lead up to the fact that testing Units in an Android application presents a greater level of complexity, strictly because our testing environment must now account for the Android platform.

Taming The Problem Of Tight Coupling

Tight coupling is a term which describes a function, class, or application module which is dependent on particular platforms, frameworks, languages, and libraries. It is a relative term, meaning that our Calculator.java example is tightly coupled to the Java programming language and standard library, but that is the extent of its coupling. Along the same lines, the problem of testing classes which are tightly coupled to the Android platform, is that you must find a way to work with or around the platform.

For classes tightly coupled to the Android platform, you have two options. The first, is to simply deploy your classes to an Android device (physical or virtual). While I do suggest that you test deploy your application code before shipping it to production, this is a highly inefficient approach during the early and middle stages of the development process with respect to time.

A Unit, however technical a definition you prefer, is generally thought of as a single function in a class (although some expand the definition to include subsequent helper functions which are called internally by the initial single function call). Either way, Units are meant to be small; building, compiling, and deploying an entire application to test a single Unit is to miss the point of testing in isolation entirely.

Another solution to the problem of tight coupling, is to use testing frameworks to interact with, or mock (simulate) platform dependencies. Frameworks such as Espresso and Robolectric give developers far more effective means for testing Units than the previous approach; the former being useful for tests run on a device (known as “instrumented tests” because apparently calling them device tests was not ambiguous enough) and the latter being capable of mocking the Android framework locally on a JVM.

Before I proceed to railing against such frameworks instead of the alternative I will discuss shortly, I want to be clear that I do not mean to imply that you should never use these options. The process which a developer uses to build and test their applications should be born of a combination of personal preference and an eye for efficiency.

For those who are not fond of building modular and loosely coupled applications, you will have no choice but to become familiar with these frameworks if you wish to have an adequate level of test coverage. Many wonderful applications have been built this way, and I am not infrequently accused of making my applications too modular and abstract. Whether you take my approach or decide to lean heavily on frameworks, I salute you for putting in the time and effort to test your applications.

Keep Your Frameworks At Arm’s Length

For the final preamble to the core lesson of this article, it is worth discussing why you might want to have an attitude of minimalism when it comes to using frameworks (and this applies to more than just testing frameworks). The subtitle above is a paraphrase from the magnanimous teacher of software best practices: Robert “Uncle Bob” C. Martin. Of the many gems he has given me since I first studied his works, this one took several years of direct experience to grasp.

Insofar as I understand this statement, the cost of using frameworks lies in the time investment required to learn and maintain them. Some of them change quite frequently and some of them do not change frequently enough. Functions become deprecated, frameworks cease to be maintained, and every 6-24 months a new framework arrives to supplant the last. Therefore, if you can find a solution which can be implemented as a platform or language feature (which tend to last much longer), it will tend to be more resistant to changes of the various types mentioned above.

On a more technical note, frameworks such as Espresso and to a lesser degree Robolectric, can never run as efficiently as simple JUnit tests, or even the framework free test from earlier on. While JUnit is indeed a framework, it is tightly coupled to the JVM, which tends to change at a much slower rate than the Android platform proper. Fewer frameworks almost invariably means code which is more efficient in terms of the time it takes to execute and write one or more tests.

From this, you can probably gather that we will now discuss an approach which leverages techniques that allow us to keep the Android platform at arm’s length, all the while allowing us plenty of code coverage, test efficiency, and the opportunity to still use a framework here and there when the need arises.

The Art Of Architecture

To use a silly analogy, one might think of frameworks and platforms as being like overbearing colleagues who will take over your development process unless you set appropriate boundaries with them. The golden principles of software architecture can give you the general concepts and specific techniques necessary to both create and enforce these boundaries. As we will see in a moment, if you have ever wondered what the benefits of applying software architecture principles in your code truly are, it is that many of them, some directly and many indirectly, make your code easier to test.

Separation Of Concerns

Separation Of Concerns is by my estimation the most universally applicable and useful concept in software architecture as a whole (without meaning to say that others should be neglected). Separation of concerns (SOC) can be applied, or completely ignored, across every perspective of software development I am aware of. To briefly summarize the concept, we will look at SOC when applied to classes, but be aware that SOC can be applied to functions through extensive usage of helper functions, and it can be extrapolated to entire modules of an application (“modules” used in the context of Android/Gradle).
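To make the function-level case concrete, here is a small illustrative sketch (the class and the 13% tax rate are my own invention, not from any real project): a short public function that delegates each concern to a named helper.

```java
import java.util.Locale;

// SOC applied within a single class: print() merely coordinates, while
// each private helper owns exactly one concern. Purely illustrative.
class ReceiptPrinter {

    String print(double subtotal) {
        return format(withTax(subtotal));
    }

    // Concern #1: the business rule (a hypothetical 13% tax rate)
    private double withTax(double subtotal) {
        return subtotal * 1.13;
    }

    // Concern #2: presentation of the final amount
    private String format(double total) {
        return String.format(Locale.US, "$%.2f", total);
    }
}
```

Each helper can now be read, named, and reasoned about on its own, which is the same benefit SOC brings at the class and module levels.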

If you have spent much time at all researching software architectural patterns for GUI applications, you will likely have come across at least one of: Model-View-Controller (MVC), Model-View-Presenter (MVP), or Model-View-ViewModel (MVVM). Having built applications in every style, I will say upfront that I do not consider any of them to be the single best option for all projects (or even features within a single project). Ironically, the pattern which the Android team presented some years ago as their recommended approach, MVVM, appears to be the least testable in absence of Android specific testing frameworks (assuming you wish to use the Android platform’s ViewModel classes, which I am admittedly a fan of).

In any case, the specifics of these patterns are less important than their generalities. All of these patterns are just different flavours of SOC which emphasize a fundamental separation of three kinds of code which I refer to as: Data, User Interface, Logic.

So, how exactly does separating Data, User Interface, and Logic help you to test your applications? The answer is that by pulling logic out of classes which must deal with platform/framework dependencies into classes which possess little or no platform/framework dependencies, testing becomes easy and framework minimal. To be clear, I am generally talking about classes which must render the user interface, store data in a SQL table, or connect to a remote server. To demonstrate how this works, let us look at a simplified three layer architecture of a hypothetical Android application.

The first class will manage our user interface. To keep things simple, I have used an Activity for this purpose, but I typically opt for Fragments instead as user interface classes. In either case, both classes present similar tight coupling to the Android platform:

public class CalculatorUserInterface extends Activity implements CalculatorContract.IUserInterface {

    private TextView display;
    private CalculatorContract.IControlLogic controlLogic;
    private final String INVALID_MESSAGE = "Invalid Expression.";

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);

        controlLogic = new DependencyProvider().provideControlLogic(this);

        display = findViewById(R.id.textViewDisplay);
        Button evaluate = findViewById(R.id.buttonEvaluate);
        evaluate.setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View view) {
                controlLogic.handleInput('=');
            }
        });
        //..bindings for the rest of the calculator buttons
    }

    @Override
    public void updateDisplay(String displayText) {
        display.setText(displayText);
    }

    @Override
    public String getDisplay() {
        return display.getText().toString();
    }

    @Override
    public void showError() {
        Toast.makeText(this, INVALID_MESSAGE, Toast.LENGTH_LONG).show();
    }
}

As you can see, the Activity has two jobs: First, since it is the entry point of a given feature of an Android application, it acts as a sort of container for the other components of the feature. In simple terms, a container can be thought of as a sort of root class which the other components are ultimately tethered to via references (or private member fields, in this case). Second, it inflates, binds references, and adds listeners to the XML layout (the user interface).

Testing Control Logic

Rather than having the Activity possess a reference to a concrete class in the back end, we have it talk to an interface of type CalculatorContract.IControlLogic. We will discuss why this is an interface in the next section. For now, just understand that whatever is on the other side of that interface is supposed to be something like a Presenter or Controller. Since this class will be controlling interactions between the front-end Activity and the back-end Calculator, I have chosen to call it CalculatorControlLogic:

public class CalculatorControlLogic implements CalculatorContract.IControlLogic {

    private CalculatorContract.IUserInterface ui;
    private CalculatorContract.IComputationLogic comp;

    public CalculatorControlLogic(CalculatorContract.IUserInterface ui, CalculatorContract.IComputationLogic comp) {
        this.ui = ui;
        this.comp = comp;
    }

    @Override
    public void handleInput(char inputChar) {
        switch (inputChar){
            case '=':
                evaluateExpression();
                break;
            //...handle other input events
        }
    }
    private void evaluateExpression() {
        Optional<String> result = comp.computeResult(ui.getDisplay());

        if (result.isPresent()) ui.updateDisplay(result.get());
        else ui.showError();
    }
}

There are many subtle things about the way in which this class is designed that make it easier to test. Firstly, all of its references are either from the Java standard library, or interfaces which are defined within the application. This means that testing this class without any frameworks is an absolute breeze, and it could be done locally on a JVM. Another small but useful tip is that all of the different interactions of this class can be called via a single generic handleInput(...) function. This provides a single entry point to test every behaviour of this class.
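Those application-defined interfaces are never actually shown in this article; a plausible reconstruction of CalculatorContract, inferred purely from the call sites in the example code (so treat it as a sketch, not the author’s actual source), would be a set of nested interfaces along these lines:

```java
import java.util.Optional;

// Hypothetical reconstruction of CalculatorContract; every signature is
// inferred from how the example classes use these interfaces.
interface CalculatorContract {

    interface IUserInterface {
        void updateDisplay(String displayText);
        String getDisplay();
        void showError();
    }

    interface IControlLogic {
        void handleInput(char inputChar);
    }

    interface IComputationLogic {
        Optional<String> computeResult(String expression);
    }
}
```

Grouping the three interfaces in one contract keeps the boundaries of the feature visible in a single file.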

Also note that in the evaluateExpression() function, I am returning a class of type Optional<String> from the back end. Normally I would use what functional programmers call an Either Monad, or as I prefer to call it, a Result Wrapper. Whatever name you prefer, it is an object which is capable of representing multiple different states through a single function call. Optional is a simpler construct which can represent either emptiness or some value of the supplied generic type. In any case, since the back end might be given an invalid expression, we want to give the ControlLogic class some means of determining the result of the back-end operation, accounting for both success and failure. In this case, an empty Optional will represent a failure.
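For readers curious about the Result Wrapper alternative, a minimal hand-rolled version might look like the following (the names and design are illustrative, not from the article’s source). Unlike Optional, the failure case can carry an error message for the user interface to display:

```java
import java.util.function.Function;

// A minimal "Result Wrapper" sketch: exactly one of the two cases exists,
// and fold() forces the caller to handle both.
abstract class Result<T> {

    static <T> Result<T> success(T value) { return new Success<>(value); }
    static <T> Result<T> failure(String error) { return new Failure<>(error); }

    // Collapse either case into a single value, e.g. the text to display
    abstract <R> R fold(Function<T, R> onSuccess, Function<String, R> onFailure);

    private static final class Success<T> extends Result<T> {
        private final T value;
        Success(T value) { this.value = value; }

        @Override
        <R> R fold(Function<T, R> onSuccess, Function<String, R> onFailure) {
            return onSuccess.apply(value);
        }
    }

    private static final class Failure<T> extends Result<T> {
        private final String error;
        Failure(String error) { this.error = error; }

        @Override
        <R> R fold(Function<T, R> onSuccess, Function<String, R> onFailure) {
            return onFailure.apply(error);
        }
    }
}
```

With such a wrapper, the control logic could write `result.fold(ui::updateDisplay, msg -> ui.showError())` instead of branching on isPresent().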

Below is an example test class which has been written using JUnit, and a class which in testing jargon is called a Fake:

public class CalculatorControlLogicTest {

    @Test
    public void validExpressionTest() {

        FakeComputationLogic comp = new FakeComputationLogic();
        FakeUserInterface ui = new FakeUserInterface();
        CalculatorControlLogic controller = new CalculatorControlLogic(ui, comp);

        controller.handleInput('=');

        assertTrue(ui.displayUpdateCalled);
        assertEquals("10.0", ui.displayValueFinal);
        assertTrue(comp.computeResultCalled);

    }

    @Test
    public void invalidExpressionTest() {

        FakeComputationLogic comp = new FakeComputationLogic();
        comp.returnEmpty = true;
        FakeUserInterface ui = new FakeUserInterface();
        ui.displayValueInitial = "+7+7";
        CalculatorControlLogic controller = new CalculatorControlLogic(ui, comp);

        controller.handleInput('=');

        assertTrue(ui.showErrorCalled);
        assertTrue(comp.computeResultCalled);

    }

    private class FakeUserInterface implements CalculatorContract.IUserInterface{
        boolean displayUpdateCalled = false;
        boolean showErrorCalled = false;
        String displayValueInitial = "5+5";
        String displayValueFinal = "";

        @Override
        public void updateDisplay(String displayText) {
            displayUpdateCalled = true;
            displayValueFinal = displayText;
        }

        @Override
        public String getDisplay() {
            return displayValueInitial;
        }

        @Override
        public void showError() {
            showErrorCalled = true;
        }
    }

    private class FakeComputationLogic implements CalculatorContract.IComputationLogic{
        boolean computeResultCalled = false;
        boolean returnEmpty = false;

        @Override
        public Optional<String> computeResult(String expression) {
            computeResultCalled = true;
            if (returnEmpty) return Optional.empty();
            else return Optional.of("10.0");
        }
    }
}

As you can see, not only can this test suite be executed very rapidly, but it did not take very much time at all to write. In any case, we will now look at some more subtle things which made writing this test class very easy.

The Power Of Abstraction And Dependency Inversion

There are two other important concepts which have been applied to CalculatorControlLogic which have made it trivially easy to test. Firstly, if you have ever wondered what the benefits of using Interfaces and Abstract Classes (collectively referred to as abstractions) in Java are, the code above is a direct demonstration. Since the class to be tested references abstractions instead of concrete classes, we were able to create Fake test doubles for the user interface and back end from within our test class. As long as these test doubles implement the appropriate interfaces, CalculatorControlLogic could not care less that they are not the real thing.

Secondly, CalculatorControlLogic has been given its dependencies via the constructor (yes, that is a form of Dependency Injection), instead of creating its own dependencies. Therefore, it does not need to be re-written when used in a production or testing environment, which is a bonus for efficiency.

Dependency Injection is a form of Inversion Of Control, which is a tricky concept to define in plain language. Whether you use Dependency Injection or a Service Locator Pattern, they both achieve what Martin Fowler (my favourite teacher on such topics) describes as “the principle of separating configuration from use.” This results in classes which are easier to test, and easier to build in isolation from one another.
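As a minimal, framework-free illustration of “separating configuration from use” (the Clock and Greeter classes below are hypothetical, not part of the calculator app): the class never constructs its own collaborator, so a test can hand it a fake via a lambda.

```java
// Constructor injection in miniature; names are illustrative.
interface Clock {
    long millisSinceMidnight();
}

class Greeter {

    private final Clock clock;

    // Configuration happens outside: production code passes a real clock,
    // a test passes a lambda returning a fixed time.
    Greeter(Clock clock) {
        this.clock = clock;
    }

    String greet() {
        // Noon expressed as milliseconds since midnight
        return clock.millisSinceMidnight() < 12L * 60 * 60 * 1000
                ? "Good morning"
                : "Good afternoon";
    }
}
```

A production caller might back Clock with the system time, while a test simply passes `() -> 0L`; in neither case does Greeter itself need to change.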

Testing Computation Logic

Finally, we come to the ComputationLogic class, which is supposed to approximate an IO device such as an adapter to a remote server, or a local database. Since we need neither of those for a simple calculator, it will just be responsible for encapsulating the logic required to validate and evaluate the expressions we give it:

public class CalculatorComputationLogic implements CalculatorContract.IComputationLogic {

    private final char ADD = '+';
    private final char SUBTRACT = '-';
    private final char MULTIPLY = '*';
    private final char DIVIDE = '/';

    @Override
    public Optional<String> computeResult(String expression) {
        if (hasOperator(expression)) return attemptEvaluation(expression);
        else return Optional.empty();

    }

    private Optional<String> attemptEvaluation(String expression) {
        String delimiter = getOperator(expression);
        Binomial b = buildBinomial(expression, delimiter);
        return evaluateBinomial(b);
    }

    private Optional<String> evaluateBinomial(Binomial b) {
        String result;
        switch (b.getOperatorChar()) {
            case ADD:
                result = Double.toString(b.firstTerm + b.secondTerm);
                break;
            case SUBTRACT:
                result = Double.toString(b.firstTerm - b.secondTerm);
                break;
            case MULTIPLY:
                result = Double.toString(b.firstTerm * b.secondTerm);
                break;
            case DIVIDE:
                result = Double.toString(b.firstTerm / b.secondTerm);
                break;
            default:
                return Optional.empty();
        }
        return Optional.of(result);
    }

    private Binomial buildBinomial(String expression, String delimiter) {
        //Pattern.quote is required because +, * and / are regex metacharacters;
        //a plain split("+") would throw a PatternSyntaxException
        String[] operands = expression.split(java.util.regex.Pattern.quote(delimiter));
        return new Binomial(
                delimiter,
                Double.parseDouble(operands[0]),
                Double.parseDouble(operands[1])
        );
    }

    private String getOperator(String expression) {
        for (char c : expression.toCharArray()) {
            if (c == ADD || c == SUBTRACT || c == MULTIPLY || c == DIVIDE)
                return "" + c;
        }

        //default
        return "+";
    }

    private boolean hasOperator(String expression) {
        for (char c : expression.toCharArray()) {
            if (c == ADD || c == SUBTRACT || c == MULTIPLY || c == DIVIDE) return true;
        }
        return false;
    }

    private class Binomial {
        String operator;
        double firstTerm;
        double secondTerm;

        Binomial(String operator, double firstTerm, double secondTerm) {
            this.operator = operator;
            this.firstTerm = firstTerm;
            this.secondTerm = secondTerm;
        }

        char getOperatorChar(){
            return operator.charAt(operator.length() - 1);
        }
    }

}

There is not too much to say about this class since typically there would be some tight coupling to a particular back-end library which would present similar problems as a class tightly coupled to Android. In a moment we will discuss what to do about such classes, but this one is so easy to test that we may as well have a try:

public class CalculatorComputationLogicTest {

    private CalculatorComputationLogic comp = new CalculatorComputationLogic();

    @Test
    public void additionTest() {
        String expression = "5+5";
        String answer = "10.0";

        Optional<String> result = comp.computeResult(expression);

        assertTrue(result.isPresent());
        assertEquals(answer, result.get());
    }

    @Test
    public void subtractTest() {
        String expression = "5-5";
        String answer = "0.0";

        Optional<String> result = comp.computeResult(expression);

        assertTrue(result.isPresent());
        assertEquals(answer, result.get());
    }

    @Test
    public void multiplyTest() {
        String expression = "5*5";
        String answer = "25.0";

        Optional<String> result = comp.computeResult(expression);

        assertTrue(result.isPresent());
        assertEquals(answer, result.get());
    }

    @Test
    public void divideTest() {
        String expression = "5/5";
        String answer = "1.0";

        Optional<String> result = comp.computeResult(expression);

        assertTrue(result.isPresent());
        assertEquals(answer, result.get());
    }

    @Test
    public void invalidTest() {
        String expression = "Potato";

        Optional<String> result = comp.computeResult(expression);

        assertFalse(result.isPresent());
    }
}

The easiest classes to test are those which are simply given some value or object, and are expected to return a result without the necessity of calling some external dependencies. In any case, there comes a point where, no matter how much software architecture wizardry you apply, you will still need to worry about classes which cannot be decoupled from platforms and frameworks. Fortunately, there is still a way we can employ software architecture: at worst, it makes these classes easier to test, and at best, it makes them so trivially simple that testing can be done at a glance.

Humble Objects And Passive Views

The above two names refer to a pattern in which an object that must talk to low-level dependencies, is simplified so much that it arguably does not need to be tested. I was first introduced to this pattern via Martin Fowler’s blog on variations of Model-View-Presenter. Later on, through Robert C. Martin’s works, I was introduced to the idea of treating certain classes as Humble Objects, which implies that this pattern does not need to be limited to user interface classes (although I do not mean to say that Fowler ever implied such a limitation).

Whatever you choose to call this pattern, it is delightfully simple to understand, and in some sense I believe it is actually just the result of rigorously applying SOC to your classes. While this pattern also applies to back-end classes, we will use our user interface class to demonstrate this principle in action. The separation is very simple: Classes which interact with platform and framework dependencies do not think for themselves (hence the monikers Humble and Passive). When an event occurs, the only thing they do is forward the details of this event to whatever logic class happens to be listening:

//from CalculatorActivity's onCreate() function:
evaluate.setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View view) {
                controlLogic.handleInput('=');
            }
        });

The logic class, which should be trivially easy to test, is then responsible for controlling the user interface in a very fine-grained manner. Rather than calling a single generic updateUserInterface(...) function on the user interface class and leaving it to do the work of a bulk update, the user interface (or other such class) will possess small and specific functions which should be easy to name and implement:

//Interface functions of CalculatorActivity:
    @Override
    public void updateDisplay(String displayText) {
        display.setText(displayText);
    }

    @Override
    public String getDisplay() {
        return display.getText().toString();
    }

    @Override
    public void showError() {
        Toast.makeText(this, INVALID_MESSAGE, Toast.LENGTH_LONG).show();
    }
//…

In principle, these two examples ought to give you enough to understand how to go about implementing this pattern. The object which possesses the logic is loosely coupled, and the object which is tightly coupled to pesky dependencies becomes almost devoid of logic.

Now, at the start of this subsection, I made the statement that these classes become arguably unnecessary to test, and it is important we look at both sides of this argument. In an absolute sense, it is impossible to achieve 100% test coverage by employing this pattern, unless you still write tests for such humble/passive classes. It is also worth noting that my decision to use a calculator as an example app means that I cannot escape having a gigantic mass of findViewById(...) calls present in the Activity. Giant masses of repetitive code are a common cause of typing errors, and in the absence of an Android UI testing framework, my only recourse for testing would be to deploy the feature to a device and manually test each interaction. Ouch.

It is at this point that I will humbly say that I do not know if 100% code coverage is absolutely necessary. I do not know many developers who strive for absolute test coverage in production code, and I have never done so myself. One day I might, but I will reserve my opinions on this matter until I have the reference experiences to back them up. In any case, I would argue that applying this pattern will still ultimately make it simpler and easier to test tightly coupled classes; if for no reason other than they become simpler to write.

Another objection to this approach was raised by a fellow programmer when I described it in another context. The objection was that the logic class (whether it be a Controller, Presenter, or even a ViewModel, depending on how you use it) becomes a God class.

While I do not agree with that sentiment, I do agree that the end result of applying this pattern is that your Logic classes become larger than if you left more decisions up to your user interface class.

This has never been an issue for me as I treat each feature of my applications as self-contained components, as opposed to having one giant controller for managing multiple user interface screens. In any case, I think this argument holds reasonably true if you fail to apply SOC to your front end or back end components. Therefore, my advice is to apply SOC to your front end and back end components quite rigorously.

Further Considerations

After all of this discussion on applying the principles of software architecture to reduce the necessity of a wide array of testing frameworks, to improve the testability of classes in general, and to enable a pattern which allows classes to be tested indirectly (at least to some degree), I am not actually here to tell you to stop using your preferred frameworks.

For those curious, I often use a library to generate mock classes for my unit tests (for Java I prefer Mockito, but these days I mostly write Kotlin and prefer MockK in that language), and JUnit is a framework which I use invariably. Since all of these options are coupled to languages as opposed to the Android platform, I can use them quite interchangeably across mobile and web application development. From time to time (if project requirements demand it), I will even use tools like Robolectric or MockWebServer, and in my five years of studying Android, I did begrudgingly use Espresso once.

My hope is that in reading this article, anyone who has experienced a similar degree of aversion to testing due to paralysis by jargon analysis, will come to see that getting started with testing really can be simple and framework minimal.


Source: Smashing Magazine, Testing Made Easier Via Framework Minimalism And Software Architecture

Testing Made Easier Via Framework Minimalism And Software Architecture


Ryan Kay



Like many other Android developers, my initial foray into testing on the platform led me to be immediately confronted with a demoralizing degree of jargon. Further, what few examples I came across at the time (circa 2015) did not present practical use cases which might have convinced me that the cost-to-benefit ratio of learning a tool like Espresso, just to verify that a TextView.setText(…) call was working properly, was a reasonable investment.

To make matters even worse, I did not have a working understanding of software architecture in theory or practice, which meant that even if I had bothered to learn these frameworks, I would have been writing tests for monolithic applications composed of a few god classes written in spaghetti code. The punchline is that building, testing, and maintaining such applications is an exercise in self-sabotage regardless of your framework expertise; yet this realization only becomes clear after one has built a modular, loosely coupled, and highly cohesive application.

From here we arrive at one of the main points of discussion in this article, which I will summarize in plain language here: Among the primary benefits of applying the golden principles of software architecture (do not worry, I will discuss them with simple examples and language), is that your code can become easier to test. There are other benefits to applying such principles, but the relationship between software architecture and testing is the focus of this article.

However, for the sake of those who wish to understand why and how we test our code, we will first explore the concept of testing by analogy; without requiring you to memorize any jargon. Before getting deeper into the primary topic, we will also look at the question of why so many testing frameworks exist, for in examining this we may begin to see their benefits, limitations, and perhaps even an alternative solution.

Testing: Why And How

This section will not be new information for any seasoned tester, but perhaps you may enjoy this analogy nonetheless. Of course I am a software engineer, not a rocket engineer, but for a moment I will borrow an analogy which relates to designing and building objects both in physical space, and in the memory space of a computer. It turns out that while the medium changes, the process is in principle quite the same.

Suppose for a moment that we are rocket engineers, and our job is to build the first stage* rocket booster of a space shuttle. Suppose as well, that we have come up with a serviceable design for the first stage to begin building and testing in various conditions.

*“First stage” refers to boosters which are fired when the rocket is first launched.

Before we get to the process, I would like to point out why I prefer this analogy: You should not have any difficulty answering the question of why we are bothering to test our design before putting it in situations where human lives are at stake. While I will not try to convince you that testing your applications before launch could save lives (although it is possible depending on the nature of the application), it could save ratings, reviews, and your job. In the broadest sense, testing is the way in which we make sure that single parts, several components, and whole systems work before we employ them in situations where it is critically important for them to not fail.

Returning to the how aspect of this analogy, I will introduce the process by which engineers go about testing a particular design: redundancy. Redundancy is simple in principle: Build copies of the component to be tested to the same design specification as what you wish to use at launch time. Test these copies in an isolated environment which strictly controls for preconditions and variables. While this does not guarantee that the rocket booster will work properly when integrated in the whole shuttle, one can be certain that if it does not work in a controlled environment, it will be very unlikely to work at all.

Suppose that of the hundreds, or perhaps thousands, of variables which the copies of the rocket design have been tested against, it comes down to the ambient temperatures in which the rocket booster will be test fired. Upon testing at 35° Celsius, we see that everything functions without error. Again, the rocket is tested at roughly room temperature without failure. The final test will be at the lowest recorded temperature for the launch site, at -5° Celsius. During this final test, the rocket fires, but after a short period it flares up and shortly thereafter explodes violently; fortunately, in a controlled and safe environment.

At this point, we know that changes in temperature appear to be at least involved in the failed test, which leads us to consider what parts of the rocket booster may be adversely affected by cold temperatures. Over time, it is discovered that one key component, a rubber O-ring which serves to staunch the flow of fuel from one compartment to another, becomes rigid and ineffectual when exposed to temperatures approaching or below freezing.

It is possible that you have noticed that this analogy is loosely based on the tragic events of the Challenger space shuttle disaster. For those unfamiliar, the sad truth (insofar as the investigations concluded) is that there were plenty of failed tests and warnings from the engineers, and yet administrative and political concerns spurred the launch to proceed regardless. In any case, whether or not you have memorized the term redundancy, my hope is that you have grasped the fundamental process for testing parts of any kind of system.

Concerning Software

Whereas the prior analogy explained the fundamental process for testing rockets (while taking plenty of liberty with the finer details), I will now summarize in a manner which is likely more relevant to you and me. While it is possible to test software only by launching it to devices once it is in some sort of deployable state, suppose instead that we apply the principle of redundancy to the individual parts of the application first.

This means that we create copies of the smaller parts of the whole application (commonly referred to as Units of software), set up an isolated test environment, and see how they behave based on whatever variables, arguments, events, and responses which may occur at runtime. Testing is truly as simple as that in theory, but the key to even getting to this process lies in building applications which are feasibly testable. This comes down to two concerns which we will look at in the next two sections. The first concern has to do with the test environment, and the second concern has to do with the way in which we structure applications.

Why Do We Need Frameworks?

In order to test a piece of software (henceforth referred to as a Unit, although this definition is deliberately an over-simplification), it is necessary to have some kind of testing environment which allows you to interact with your software at runtime. For those building applications to be executed purely on a JVM (Java Virtual Machine) environment, all that is required to write tests is a JRE (Java Runtime Environment). Take for example this very simple Calculator class:

class Calculator {

    public int add(int a, int b){
        return a + b;
    }

    public int subtract(int a, int b){
        return a - b;
    }
}

In absence of any frameworks, as long as we have a test class which contains a main function to actually execute our code, we can test it. As you may recall, the main function denotes the starting point of execution for a simple Java program. As for what we are testing for, we simply feed some test data into the Calculator’s functions and verify that it performs basic arithmetic properly:

public class Main {

    public static void main(String[] args){
        //create a copy of the Unit to be tested
        Calculator calc = new Calculator();

        //create test conditions to verify behaviour
        int addTest = calc.add(2, 2);
        int subtractTest = calc.subtract(2, 2);

        //verify behaviour by assertion
        if (addTest == 4) System.out.println("addTest has passed.");
        else System.out.println("addTest has failed.");

        if (subtractTest == 0) System.out.println("subtractTest has passed.");
        else System.out.println("subtractTest has failed.");
    }
}

Testing an Android application is, of course, a completely different procedure. Although there is a main function buried deep within the source of the ZygoteInit.java file (the finer details of which are not important here), which is invoked prior to an Android application being launched on the JVM, even a junior Android developer ought to know that the system itself is responsible for calling this function; not the developer. Instead, the entry points for Android applications happen to be the Application class, and any Activity classes which the system can be pointed to via the AndroidManifest.xml file.

All of this is just a lead-up to the fact that testing Units in an Android application presents a greater level of complexity, strictly because our testing environment must now account for the Android platform.

Taming The Problem Of Tight Coupling

Tight coupling is a term which describes a function, class, or application module which is dependent on particular platforms, frameworks, languages, or libraries. It is a relative term: our Calculator.java example is tightly coupled to the Java programming language and standard library, but that is the extent of its coupling. Along the same lines, the problem with testing classes which are tightly coupled to the Android platform is that you must find a way to work with or around the platform.

For classes tightly coupled to the Android platform, you have two options. The first is simply to deploy your classes to an Android device (physical or virtual). While I do suggest that you test-deploy your application code before shipping it to production, this is a highly inefficient approach during the early and middle stages of the development process with respect to time.

A Unit, however technical a definition you prefer, is generally thought of as a single function in a class (although some expand the definition to include subsequent helper functions which are called internally by the initial single function call). Either way, Units are meant to be small; building, compiling, and deploying an entire application to test a single Unit is to miss the point of testing in isolation entirely.

Another solution to the problem of tight coupling, is to use testing frameworks to interact with, or mock (simulate) platform dependencies. Frameworks such as Espresso and Robolectric give developers far more effective means for testing Units than the previous approach; the former being useful for tests run on a device (known as “instrumented tests” because apparently calling them device tests was not ambiguous enough) and the latter being capable of mocking the Android framework locally on a JVM.

Before I proceed to rail against such frameworks in favour of the alternative I will discuss shortly, I want to be clear that I do not mean to imply that you should never use these options. The process which a developer uses to build and test their applications should be born of a combination of personal preference and an eye for efficiency.

For those who are not fond of building modular and loosely coupled applications, you will have no choice but to become familiar with these frameworks if you wish to have an adequate level of test coverage. Many wonderful applications have been built this way, and I am not infrequently accused of making my applications too modular and abstract. Whether you take my approach or decide to lean heavily on frameworks, I salute you for putting in the time and effort to test your applications.

Keep Your Frameworks At Arm’s Length

For the final preamble to the core lesson of this article, it is worth discussing why you might want to have an attitude of minimalism when it comes to using frameworks (and this applies to more than just testing frameworks). The subtitle above is a paraphrase from the magnanimous teacher of software best practices: Robert “Uncle Bob” C. Martin. Of the many gems he has given me since I first studied his works, this one took several years of direct experience to grasp.

Insofar as I understand what this statement is about, the cost of using frameworks is the time investment required to learn and maintain them. Some of them change quite frequently and some of them do not change frequently enough. Functions become deprecated, frameworks cease to be maintained, and every 6-24 months a new framework arrives to supplant the last. Therefore, if you can find a solution which can be implemented as a platform or language feature (which tend to last much longer), it will tend to be more resistant to changes of the various types mentioned above.

On a more technical note, frameworks such as Espresso, and to a lesser degree Robolectric, can never run as efficiently as simple JUnit tests, or even the framework-free test from earlier on. While JUnit is indeed a framework, it is coupled only to the JVM, which tends to change at a much slower rate than the Android platform proper. Fewer frameworks almost invariably means code which is more efficient in terms of the time it takes to execute and write one or more tests.

From this, you can probably gather that we will now be discussing an approach which leverages some techniques that allow us to keep the Android platform at arm’s length, all the while giving us plenty of code coverage, test efficiency, and the opportunity to still use a framework here or there when the need arises.

The Art Of Architecture

To use a silly analogy, one might think of frameworks and platforms as being like overbearing colleagues who will take over your development process unless you set appropriate boundaries with them. The golden principles of software architecture can give you the general concepts and specific techniques necessary to both create and enforce these boundaries. As we will see in a moment, if you have ever wondered what the benefits of applying software architecture principles in your code truly are, here is one: some of them directly, and many of them indirectly, make your code easier to test.

Separation Of Concerns

Separation of concerns is, by my estimation, the most universally applicable and useful concept in software architecture as a whole (without meaning to say that others should be neglected). Separation of concerns (SOC) can be applied, or completely ignored, across every perspective of software development I am aware of. To briefly summarize the concept, we will look at SOC as applied to classes, but be aware that SOC can also be applied to functions (through extensive use of helper functions), and it can be extrapolated to entire modules of an application (“modules” used in the context of Android/Gradle).
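As an aside, SOC at the function level can be sketched in just a few lines. The class and names below are a hypothetical toy example (not from the calculator app that follows): a single public entry point delegates each concern to a small helper function.

```java
//A toy sketch of SOC applied at the function level: the entry point
//orchestrates the helpers and holds no detailed logic of its own.
class ExpressionFormatter {

    //entry point: delegates each concern to a dedicated helper
    String format(String rawInput) {
        return normalizeMinusSign(stripWhitespace(rawInput));
    }

    //concern #1: whitespace handling
    private String stripWhitespace(String input) {
        return input.replaceAll("\\s+", "");
    }

    //concern #2: character normalization (unicode minus to ASCII hyphen-minus)
    private String normalizeMinusSign(String input) {
        return input.replace('\u2212', '-');
    }
}
```

Because each helper addresses exactly one concern, a mistake in the whitespace handling cannot hide inside the normalization logic, and each piece can be reasoned about in isolation.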

If you have spent much time at all researching software architectural patterns for GUI applications, you will likely have come across at least one of Model-View-Controller (MVC), Model-View-Presenter (MVP), or Model-View-ViewModel (MVVM). Having built applications in every style, I will say upfront that I do not consider any of them to be the single best option for all projects (or even for all features within a single project). Ironically, the pattern which the Android team presented some years ago as their recommended approach, MVVM, appears to be the least testable in the absence of Android-specific testing frameworks (assuming you wish to use the Android platform’s ViewModel classes, which I am admittedly a fan of).

In any case, the specifics of these patterns are less important than their generalities. All of these patterns are just different flavours of SOC which emphasize a fundamental separation of three kinds of code, which I refer to as Data, User Interface, and Logic.

So, how exactly does separating Data, User Interface, and Logic help you to test your applications? The answer is that by pulling logic out of classes which must deal with platform/framework dependencies into classes which possess little or no platform/framework dependencies, testing becomes easy and framework minimal. To be clear, I am generally talking about classes which must render the user interface, store data in a SQL table, or connect to a remote server. To demonstrate how this works, let us look at a simplified three layer architecture of a hypothetical Android application.
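The three layers below all talk to each other through a contract. Before diving in, here is a minimal sketch of what that CalculatorContract might look like; the exact declarations are my plausible reconstruction (inferred from the usages that follow, and assuming Optional&lt;String&gt; as the result type), not a canonical listing.

```java
import java.util.Optional;

//A plausible minimal declaration of the contract tying the three layers together.
interface CalculatorContract {

    //implemented by the user interface class (Activity or Fragment)
    interface IUserInterface {
        void updateDisplay(String displayText);
        String getDisplay();
        void showError();
    }

    //implemented by the Presenter/Controller sitting between UI and back end
    interface IControlLogic {
        void handleInput(char inputChar);
    }

    //implemented by the back-end evaluator
    interface IComputationLogic {
        Optional<String> computeResult(String expression);
    }
}
```

Keeping the three interfaces nested in a single contract makes it easy to see, in one place, exactly how a feature’s components are allowed to talk to each other.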

The first class will manage our user interface. To keep things simple, I have used an Activity for this purpose, but I typically opt for Fragments instead as user interface classes. In either case, both classes present similar tight coupling to the Android platform:

public class CalculatorUserInterface extends Activity implements CalculatorContract.IUserInterface {

    private TextView display;
    private CalculatorContract.IControlLogic controlLogic;
    private final String INVALID_MESSAGE = "Invalid Expression.";

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);

        controlLogic = new DependencyProvider().provideControlLogic(this);

        display = findViewById(R.id.textViewDisplay);
        Button evaluate = findViewById(R.id.buttonEvaluate);
        evaluate.setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View view) {
                controlLogic.handleInput('=');
            }
        });
        //..bindings for the rest of the calculator buttons
    }

    @Override
    public void updateDisplay(String displayText) {
        display.setText(displayText);
    }

    @Override
    public String getDisplay() {
        return display.getText().toString();
    }

    @Override
    public void showError() {
        Toast.makeText(this, INVALID_MESSAGE, Toast.LENGTH_LONG).show();
    }
}

As you can see, the Activity has two jobs. First, since it is the entry point of a given feature of an Android application, it acts as a sort of container for the other components of the feature. In simple terms, a container can be thought of as a sort of root class which the other components are ultimately tethered to via references (or private member fields, in this case). Second, it inflates the XML layout (the user interface), binds references to its views, and attaches listeners.

Testing Control Logic

Rather than having the Activity possess a reference to a concrete class in the back end, we have it talk to an interface of type CalculatorContract.IControlLogic. We will discuss why this is an interface in the next section. For now, just understand that whatever is on the other side of that interface is supposed to be something like a Presenter or Controller. Since this class will be controlling interactions between the front-end Activity and the back-end Calculator, I have chosen to call it CalculatorControlLogic:

public class CalculatorControlLogic implements CalculatorContract.IControlLogic {

    private CalculatorContract.IUserInterface ui;
    private CalculatorContract.IComputationLogic comp;

    public CalculatorControlLogic(CalculatorContract.IUserInterface ui, CalculatorContract.IComputationLogic comp) {
        this.ui = ui;
        this.comp = comp;
    }

    @Override
    public void handleInput(char inputChar) {
        switch (inputChar){
            case '=':
                evaluateExpression();
                break;
            //...handle other input events
        }
    }
    private void evaluateExpression() {
        Optional<String> result = comp.computeResult(ui.getDisplay());

        if (result.isPresent()) ui.updateDisplay(result.get());
        else ui.showError();
    }
}

There are many subtle things about the way in which this class is designed that make it easier to test. Firstly, all of its references are either from the Java standard library, or interfaces which are defined within the application. This means that testing this class without any frameworks is an absolute breeze, and it could be done locally on a JVM. Another small but useful tip is that all of the different interactions of this class can be called via a single generic handleInput(...) function. This provides a single entry point to test every behaviour of this class.

Also note that in the evaluateExpression() function, I am returning a class of type Optional<String> from the back end. Normally I would use what functional programmers call an Either Monad, or as I prefer to call it, a Result Wrapper. Whatever stupid name you use, it is an object which is capable of representing multiple different states through a single function call. Optional is a simpler construct which can represent either a null, or some value of the supplied generic type. In any case, since the back end might be given an invalid expression, we want to give the ControlLogic class some means of determining the result of the back-end operation, accounting for both success and failure. In this case, null will represent a failure.
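For illustration, a minimal sketch of such a Result Wrapper in plain Java might look like this. The naming and API are my own, not a standard library type; the point is that, unlike Optional, a failure can carry a reason which the UI could surface instead of a generic error message.

```java
//A minimal "Result Wrapper" (Either-style) sketch: success carries a value,
//failure carries an error message.
abstract class Result<T> {

    static <T> Result<T> success(T value) { return new Success<>(value); }
    static <T> Result<T> failure(String error) { return new Failure<>(error); }

    abstract boolean isSuccess();
    abstract T getValue();
    abstract String getError();

    private static final class Success<T> extends Result<T> {
        private final T value;
        Success(T value) { this.value = value; }
        boolean isSuccess() { return true; }
        T getValue() { return value; }
        String getError() { throw new IllegalStateException("Success carries no error"); }
    }

    private static final class Failure<T> extends Result<T> {
        private final String error;
        Failure(String error) { this.error = error; }
        boolean isSuccess() { return false; }
        T getValue() { throw new IllegalStateException("Failure carries no value"); }
        String getError() { return error; }
    }
}
```

With a type like this, evaluateExpression() could branch on result.isSuccess() and pass result.getError() to showError() for a more specific message.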

Below is an example test class which has been written using JUnit, and a class which in testing jargon is called a Fake:

public class CalculatorControlLogicTest {

    @Test
    public void validExpressionTest() {

        CalculatorContract.IComputationLogic comp = new FakeComputationLogic();
        CalculatorContract.IUserInterface ui = new FakeUserInterface();
        CalculatorControlLogic controller = new CalculatorControlLogic(ui, comp);

        controller.handleInput('=');

        assertTrue(((FakeUserInterface) ui).displayUpdateCalled);
        assertTrue(((FakeUserInterface) ui).displayValueFinal.equals("10.0"));
        assertTrue(((FakeComputationLogic) comp).computeResultCalled);

    }

    @Test
    public void invalidExpressionTest() {

        CalculatorContract.IComputationLogic comp = new FakeComputationLogic();
        ((FakeComputationLogic) comp).returnEmpty = true;
        CalculatorContract.IUserInterface ui = new FakeUserInterface();
        ((FakeUserInterface) ui).displayValueInitial = "+7+7";
        CalculatorControlLogic controller = new CalculatorControlLogic(ui, comp);

        controller.handleInput('=');

        assertTrue(((FakeUserInterface) ui).showErrorCalled);
        assertTrue(((FakeComputationLogic) comp).computeResultCalled);

    }

    private class FakeUserInterface implements CalculatorContract.IUserInterface{
        boolean displayUpdateCalled = false;
        boolean showErrorCalled = false;
        String displayValueInitial = "5+5";
        String displayValueFinal = "";

        @Override
        public void updateDisplay(String displayText) {
            displayUpdateCalled = true;
            displayValueFinal = displayText;
        }

        @Override
        public String getDisplay() {
            return displayValueInitial;
        }

        @Override
        public void showError() {
            showErrorCalled = true;
        }
    }

    private class FakeComputationLogic implements CalculatorContract.IComputationLogic{
        boolean computeResultCalled = false;
        boolean returnEmpty = false;

        @Override
        public Optional<String> computeResult(String expression) {
            computeResultCalled = true;
            if (returnEmpty) return Optional.empty();
            else return Optional.of("10.0");
        }
    }
}

As you can see, not only can this test suite be executed very rapidly, but it did not take very much time at all to write. In any case, we will now look at some more subtle things which made writing this test class very easy.

The Power Of Abstraction And Dependency Inversion

There are two other important concepts which have been applied to CalculatorControlLogic which have made it trivially easy to test. Firstly, if you have ever wondered what the benefits of using Interfaces and Abstract Classes (collectively referred to as abstractions) in Java are, the code above is a direct demonstration. Since the class to be tested references abstractions instead of concrete classes, we were able to create Fake test doubles for the user interface and back end from within our test class. As long as these test doubles implement the appropriate interfaces, CalculatorControlLogic could not care less that they are not the real thing.

Secondly, CalculatorControlLogic has been given its dependencies via the constructor (yes, that is a form of Dependency Injection) instead of creating its own dependencies. Therefore, it does not need to be rewritten when used in a production or testing environment, which is a bonus for efficiency.

Dependency Injection is a form of Inversion Of Control, which is a tricky concept to define in plain language. Whether you use Dependency Injection or a Service Locator Pattern, they both achieve what Martin Fowler (my favourite teacher on such topics) describes as “the principle of separating configuration from use.” This results in classes which are easier to test, and easier to build in isolation from one another.
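To make “separating configuration from use” concrete, here is a tiny self-contained sketch. The names are hypothetical (a toy example, not taken from the calculator app): Greeter only knows the MessageSource abstraction, while Provider is the single place that knows which concrete implementation to wire in.

```java
//"Configuration" lives in Provider; "use" lives in Greeter.
class DependencyDemo {

    interface MessageSource {
        String message();
    }

    //use: receives its dependency via the constructor, never constructs it
    static class Greeter {
        private final MessageSource source;

        Greeter(MessageSource source) {
            this.source = source;
        }

        String greet() {
            return "Hello, " + source.message();
        }
    }

    //configuration: swap the lambda for a fake in tests, or a real
    //implementation in production, without touching Greeter at all
    static class Provider {
        Greeter provideGreeter() {
            return new Greeter(() -> "world");
        }
    }
}
```

The DependencyProvider called from onCreate() earlier plays exactly the Provider role here: it is the one place that decides which concrete classes get wired together.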

Testing Computation Logic

Finally, we come to the ComputationLogic class, which is supposed to approximate an IO device such as an adapter to a remote server, or a local database. Since we need neither of those for a simple calculator, it will just be responsible for encapsulating the logic required to validate and evaluate the expressions we give it:

public class CalculatorComputationLogic implements CalculatorContract.IComputationLogic {

    private final char ADD = '+';
    private final char SUBTRACT = '-';
    private final char MULTIPLY = '*';
    private final char DIVIDE = '/';

    @Override
    public Optional<String> computeResult(String expression) {
        if (hasOperator(expression)) return attemptEvaluation(expression);
        else return Optional.empty();

    }

    private Optional<String> attemptEvaluation(String expression) {
        String delimiter = getOperator(expression);
        Binomial b = buildBinomial(expression, delimiter);
        return evaluateBinomial(b);
    }

    private Optional<String> evaluateBinomial(Binomial b) {
        String result;
        switch (b.getOperatorChar()) {
            case ADD:
                result = Double.toString(b.firstTerm + b.secondTerm);
                break;
            case SUBTRACT:
                result = Double.toString(b.firstTerm - b.secondTerm);
                break;
            case MULTIPLY:
                result = Double.toString(b.firstTerm * b.secondTerm);
                break;
            case DIVIDE:
                result = Double.toString(b.firstTerm / b.secondTerm);
                break;
            default:
                return Optional.empty();
        }
        return Optional.of(result);
    }

    private Binomial buildBinomial(String expression, String delimiter) {
        //split() expects a regular expression, so the operator must be quoted:
        //"+" and "*" are regex meta characters and would otherwise throw a
        //PatternSyntaxException (requires java.util.regex.Pattern)
        String[] operands = expression.split(Pattern.quote(delimiter));
        return new Binomial(
                delimiter,
                Double.parseDouble(operands[0]),
                Double.parseDouble(operands[1])
        );
    }

    private String getOperator(String expression) {
        for (char c : expression.toCharArray()) {
            if (c == ADD || c == SUBTRACT || c == MULTIPLY || c == DIVIDE)
                return "" + c;
        }

        //default
        return "+";
    }

    private boolean hasOperator(String expression) {
        for (char c : expression.toCharArray()) {
            if (c == ADD || c == SUBTRACT || c == MULTIPLY || c == DIVIDE) return true;
        }
        return false;
    }

    private class Binomial {
        String operator;
        double firstTerm;
        double secondTerm;

        Binomial(String operator, double firstTerm, double secondTerm) {
            this.operator = operator;
            this.firstTerm = firstTerm;
            this.secondTerm = secondTerm;
        }

        char getOperatorChar(){
            return operator.charAt(operator.length() - 1);
        }
    }

}

There is not too much to say about this class since typically there would be some tight coupling to a particular back-end library which would present similar problems as a class tightly coupled to Android. In a moment we will discuss what to do about such classes, but this one is so easy to test that we may as well have a try:

public class CalculatorComputationLogicTest {

    private CalculatorComputationLogic comp = new CalculatorComputationLogic();

    @Test
    public void additionTest() {
        String EXPRESSION = "5+5";
        String ANSWER = "10.0";

        Optional<String> result = comp.computeResult(EXPRESSION);

        assertTrue(result.isPresent());
        assertEquals(ANSWER, result.get());
    }

    @Test
    public void subtractTest() {
        String EXPRESSION = "5-5";
        String ANSWER = "0.0";

        Optional<String> result = comp.computeResult(EXPRESSION);

        assertTrue(result.isPresent());
        assertEquals(ANSWER, result.get());
    }

    @Test
    public void multiplyTest() {
        String EXPRESSION = "5*5";
        String ANSWER = "25.0";

        Optional<String> result = comp.computeResult(EXPRESSION);

        assertTrue(result.isPresent());
        assertEquals(ANSWER, result.get());
    }

    @Test
    public void divideTest() {
        String EXPRESSION = "5/5";
        String ANSWER = "1.0";

        Optional<String> result = comp.computeResult(EXPRESSION);

        assertTrue(result.isPresent());
        assertEquals(ANSWER, result.get());
    }

    @Test
    public void invalidTest() {
        String EXPRESSION = "Potato";

        Optional<String> result = comp.computeResult(EXPRESSION);

        assertFalse(result.isPresent());
    }
}

The easiest classes to test are those which are simply given some value or object, and are expected to return a result without the necessity of calling external dependencies. In any case, there comes a point where, no matter how much software architecture wizardry you apply, you will still need to worry about classes which cannot be decoupled from platforms and frameworks. Fortunately, there is still a way we can employ software architecture to help: at worst, it makes these classes easier to test; at best, it makes them so trivially simple that testing can be done at a glance.

Humble Objects And Passive Views

The above two names refer to a pattern in which an object that must talk to low-level dependencies, is simplified so much that it arguably does not need to be tested. I was first introduced to this pattern via Martin Fowler’s blog on variations of Model-View-Presenter. Later on, through Robert C. Martin’s works, I was introduced to the idea of treating certain classes as Humble Objects, which implies that this pattern does not need to be limited to user interface classes (although I do not mean to say that Fowler ever implied such a limitation).

Whatever you choose to call this pattern, it is delightfully simple to understand, and in some sense I believe it is actually just the result of rigorously applying SOC to your classes. While this pattern also applies to back-end classes, we will use our user interface class to demonstrate this principle in action. The separation is very simple: classes which interact with platform and framework dependencies do not think for themselves (hence the monikers Humble and Passive). When an event occurs, the only thing they do is forward the details of this event to whatever logic class happens to be listening:

//from CalculatorUserInterface's onCreate() function:
evaluate.setOnClickListener(new View.OnClickListener() {
    @Override
    public void onClick(View view) {
        controlLogic.handleInput('=');
    }
});

The logic class, which should be trivially easy to test, is then responsible for controlling the user interface in a very fine-grained manner. Rather than calling a single generic updateUserInterface(...) function on the user interface class and leaving it to do the work of a bulk update, the user interface (or other such class) will possess small and specific functions which should be easy to name and implement:

//Interface functions of CalculatorUserInterface:
    @Override
    public void updateDisplay(String displayText) {
        display.setText(displayText);
    }

    @Override
    public String getDisplay() {
        return display.getText().toString();
    }

    @Override
    public void showError() {
        Toast.makeText(this, INVALID_MESSAGE, Toast.LENGTH_LONG).show();
    }
//…

In principle, these two examples ought to give you enough to understand how to go about implementing this pattern. The object which possesses the logic is loosely coupled, and the object which is tightly coupled to pesky dependencies becomes almost devoid of logic.

Now, at the start of this subsection, I made the statement that these classes become arguably unnecessary to test, and it is important that we look at both sides of this argument. In an absolute sense, it is impossible to achieve 100% test coverage by employing this pattern, unless you still write tests for such humble/passive classes. It is also worth noting that my decision to use a Calculator as an example app means that I cannot escape having a gigantic mass of findViewById(...) calls present in the Activity. Giant masses of repetitive code are a common cause of typing errors, and in the absence of an Android UI testing framework, my only recourse would be to deploy the feature to a device and manually test each interaction. Ouch.

It is at this point that I will humbly say that I do not know if 100% code coverage is absolutely necessary. I do not know many developers who strive for absolute test coverage in production code, and I have never done so myself. One day I might, but I will reserve my opinions on this matter until I have the reference experiences to back them up. In any case, I would argue that applying this pattern will still ultimately make it simpler and easier to test tightly coupled classes; if for no reason other than they become simpler to write.

Another objection to this approach was raised by a fellow programmer when I described it in another context: the objection was that the logic class (whether it be a Controller, Presenter, or even a ViewModel, depending on how you use it) becomes a God class.

While I do not agree with that sentiment, I do agree that the end result of applying this pattern is that your Logic classes become larger than if you left more decisions up to your user interface class.

This has never been an issue for me as I treat each feature of my applications as self-contained components, as opposed to having one giant controller for managing multiple user interface screens. In any case, I think this argument holds reasonably true if you fail to apply SOC to your front end or back end components. Therefore, my advice is to apply SOC to your front end and back end components quite rigorously.

Further Considerations

After all of this discussion on applying the principles of software architecture to reduce the necessity of a wide array of testing frameworks, to improve the testability of classes in general, and on a pattern which allows classes to be tested indirectly (at least to some degree), I am not actually here to tell you to stop using your preferred frameworks.

For those curious, I often use a library to generate mock classes for my Unit tests (for Java I prefer Mockito, but these days I mostly write Kotlin and prefer MockK in that language), and JUnit is a framework which I use invariably. Since all of these options are coupled to languages as opposed to the Android platform, I can use them quite interchangeably across mobile and web application development. From time to time (if project requirements demand it), I will even use tools like Robolectric or MockWebServer, and in my five years of studying Android, I did begrudgingly use Espresso once.

My hope is that in reading this article, anyone who has experienced a similar degree of aversion to testing due to paralysis by jargon analysis, will come to see that getting started with testing really can be simple and framework minimal.


Source: Smashing Magazine, Testing Made Easier Via Framework Minimalism And Software Architecture

Bringing A Better Design Process To Your Organization



Eric Olive



As user experience (UX) designers and researchers, the most common complaint we hear from users is:

“Why don’t they think about what I need?”

In fact, many organizations have teams dedicated to delivering what users and customers need. More and more software developers are eager to work with UX designers in order to code interfaces that customers will use and understand. The problem is that complex software projects can easily become bogged down in competing priorities and confusion about what to do next.

The result is poor design that impedes productivity. For example, efficiency in healthcare is hampered by electronic medical records (EMRs). Complaints about these software systems are legion. Dr. Steven Ugent, a Boston-based dermatologist and Yale Medical School alumnus, is no exception.

Dr. Ugent has used two EMR systems since 2010. Before then, he finished work promptly at 5:15 every day; since he and his colleagues started using EMRs, he typically works an extra half hour to 1.5 hours in the evening. “I don’t know any doctor who is happy with their medical records system. The crazy thing is that I was much more efficient when I was using pen and paper.”


The EMR systems are clunky, inflexible, and hard to customize. Ugent, for instance, cannot embed photos directly into his chart notes. Instead, he must open the folder with the photo of the mole and then open a different folder to see the text. This setup is particularly cumbersome for dermatologists who rely heavily on photos when treating patients.

Ugent succinctly summarizes the problem with EMRs:

“The people who design it [the EMR system] don’t understand my workflow. If they did, they would design a different system.”

Doctors are not alone in their frustration with clunky software. Consumers and professionals around the world make similar complaints:

“Why can’t I find what I need?”

“Why do they make it so hard?”

“Why do I have to create a login when I simply want to buy this product? I’m giving them money. Isn’t that enough?”

A major contributor to clunky software is flawed design processes. In this article, we’ll outline four design process problems and explain how to address them.

  1. Complexity
  2. Next-Release Syndrome
  3. Insufficient Time For Design Iterations
  4. Surrendering Control To Outside Vendors

1. Complexity

Scale, multiple stakeholders, and the need for sophisticated code are among the many factors contributing to the complexity of large software projects.

Sometimes overlooked, however, are complex laws and regulations. For example, insurance is heavily regulated at the state level, adding a layer of complexity for insurance companies operating in multiple states. Banks and credit unions are subject to regulation while utilities must comply with state and federal environmental laws.

Healthcare products and software subject to FDA regulations offer an even greater challenge. The problem isn’t that the regulations are unreasonable. Safety is paramount. The issues are time, budget, and the planning necessary to meet FDA requirements.

As Jeff Horvath, Ph.D., a UX consultant with extensive experience in healthcare, explains: “These requirements add a couple of orders of magnitude to the rigor for writing test protocols, test setup, data gathering, analysis, quality controls, and getting approval to conduct the research in the first place.” For example, a single round of usability testing jumps from six weeks (a reasonable time frame for a standard usability test) to six months. And that’s with a single round of usability testing. Often, two or more rounds of testing are necessary.

This level of rigor is a wakeup call for companies new to working with the FDA. More than once, Horvath has faced tough conversations with clients who were unprepared for the extended timelines and additional budget necessary to meet FDA requirements. It’s hard, but necessary. “It pays to be thorough,” says Horvath. In 2018 the FDA approved a mere 11% of pre-market submissions.

The demands on researchers, designers, and developers are higher for healthcare software requiring FDA compliance than for traditional software products. For example:

  • A UX researcher can only conduct one or two usability test sessions per day as opposed to the more common five to six sessions per day for standard software.
  • UX designers must remain hyper-attentive to every aspect of the user’s interaction with software. Even one confusing interaction could cause a clinician to make an error that could jeopardize a patient’s health. For the same reason, UI designers must draw interfaces that remain faithful to every interaction.
  • A longer time frame for design and usability testing means that the developer’s coding efforts must be planned carefully. Skilled and well-intentioned developers are often eager to modify the code as soon as new information becomes available. While this approach can work in organizations well practiced in rapid iteration, it carries risk when designing and coding complex systems.

Failure to manage complexity can have fatal consequences as happened when Danielle McCray was admitted to Tallahassee Memorial Hospital as she was about to give birth. To ease her discomfort, healthcare workers connected her to a patient-controlled analgesia machine, a programmable infusion pump.

Eight hours later McCray was pronounced dead from a morphine overdose. A major factor in this tragedy was the flawed design of the infusion pump used to administer medication. The pump required 27 programming steps. Failure to address such complexity by designing a more intuitive user interface contributed to unnecessary death.

Solution

The solution is to acknowledge and address complexity. This point sounds logical. Yet, as explained above, complicated FDA regulations often surprise company leaders. Denial doesn’t work. Failing to plan means your organization will likely fall into the 89% of pre-market submissions the FDA rejected in 2018.

When conducting usability tests, user experience researchers must take three steps to manage the complexity associated with FDA regulations:

  1. The moderator (the person who runs the usability test) must be hyper-attentive. For example, if an MRI scan requires a technician to follow a strict sequence of steps while using the associated software, the moderator must observe carefully to determine if the participant follows the instructions to the letter. If not, the task is rated as a failure, meaning that both the interface design and associated documentation will require modification;
  2. The moderator must also track close calls. For example, a participant might initially perform the steps out of order, discover the mistake, and recover by following the proper sequence. The FDA considers this a near miss, and the moderator must report it as such;
  3. The moderator must also assess the participant’s knowledge. Does she believe that she has followed the proper sequence? Is this belief accurate?

2. Next-Release Syndrome

One factor in the failure to acknowledge complexity is a fix-it-later mindset we call next-release syndrome. Software bugs are not a problem because “we’ll fix that in the next release.” The emphasis on speed over quality and safety makes it all too easy to postpone solving the hard problems.

Anyone involved in product design and development must tackle next-release syndrome. Two examples make the point:

  • We discovered serious design flaws with a client’s healthcare tracking software. The company chose to release the software without addressing these problems. Not surprisingly, customers were unhappy.
  • We conducted usability tests for a large credit union based in the U.S. The participants were seasoned financial advisers. Testing revealed serious design flaws including confusing status icons, buttons with an unclear purpose, and a nearly hidden link that prevented participants from displaying important data. Remember, if the user doesn’t see it, it’s not there. When we reported the findings, the response was: “We’ll fix that in the next release.” As expected, the web application was not well received. Responses from users included: “Why did you ask us to review the app if you had no intention of making changes?”

Solution: Reject The Fix-It-Next-Time Mentality

The solution is to address serious design problems now. Sounds straightforward. But, how do you convince decision-makers to change the entrenched “fix-it-later” mindset?

The key is to shift the conversation about achievement away from product delivery toward the value created. For example, teams that take the time to revise a design based on user research are likely to see better customer reactions and, over time, increased customer loyalty.

Strengthen the case by using quantitative data to show decision-makers the direct connection between user research and increased revenue and a positive customer experience.

Use data to connect research and design improvements to specific business goals

Use data to connect research and design improvements to specific business goals (Large preview)

Re-defining value is, in effect, a process improvement because it establishes a new set of priorities that better serve customers and your company’s long-term interests. As McKinsey reports in The Business Value of Design: “Top-quartile companies embrace the full user experience; they break down internal barriers among physical, digital, and service design.”

3. Insufficient Time For Design Iterations

Related to the next-release syndrome is insufficient time to iterate the design based on research findings or changing business requirements. “We don’t have time for that,” is the common refrain from developers and product owners. Designers working in Agile environments are frequently pressured to avoid “holding up” the development team.

Development speeds along, and the software is released. We’ve all seen the results from confusing phone apps, to clunky medical records software, to the cumbersome user interface for financial advisers referenced above.

Solution: Design Spikes

One solution comes from the coding world. In his article “Fitting Big-Picture UX Into Agile Development”, Damon Dimmick offers the idea of design spikes, “bubbles of time that allow designers to focus on complex UX issues.” They fit into the Scrum framework by temporarily taking the place of a regular sprint.

Design iteration

Design iteration (Large preview)

Design spikes offer several advantages:

  • They allow UX teams to focus on holistic issues and avoid getting bogged down in granular design issues that are sometimes emphasized within a single sprint;
  • They offer the opportunity to explore complex UX questions from a high level. If needed, the UX design team can also engage in design-centric thinking at any point in order to solve larger UX challenges;
  • By adopting design spikes, UX teams can leverage the same flexibility that development teams use in the agile process and devote the time needed to focus on design issues that don’t always fit well into a standard scrum sprint;
  • Development work that is unlikely to be affected by design decisions can proceed.

Naturally, design iterations often affect certain parts of the code for a site, app, or software product. For this reason, during design spikes any code that will likely be affected by the design spike cannot move forward. But, as Dimmick clearly states, this “delay” will likely save time by avoiding re-work. It simply does not make sense to write code now and then re-write it a few weeks later after the team has agreed on a revised design. In short, postponing some coding actually saves time and budget.

4. Relying Too Heavily On Vendors

Addressing complexity, resisting next-release syndrome, and allowing time for iteration are essential to an effective design process. For many firms, another consideration is their relationship with software vendors. These vendors play an important, even critical, role in development. Yet, granting them too much leverage makes it difficult to control your own product.

Outsourcing to software vendors

Outsourcing to software vendors (Large preview)

Certainly, we should treat colleagues and vendors with respect and make reasonable requests. That doesn’t mean, however, that it’s necessary to surrender all leverage as happened during my tenure at a large finance firm.

While working at this firm as a UX designer, I frequently encountered this dynamic:

Manager: “Hey, Eric can you evaluate this claims software that we’re planning to buy? We just want to make sure it works as intended.”

Me: “Sure, I’ll send you my preliminary findings by the end of the week.”

Manager: “Great.”

The following week:

Manager: “Thanks for the review. I see that you found three serious issues: Hard to find the number for an existing claim, screens with too much text that are hard to read, and the difficulty of returning to a previous screen when processing a new claim. That is concerning. Do you think those issues will hinder productivity?”

Me: “Yes, I think these issues will increase stress and processing time in the Claims Center. I’m quite concerned because my previous work with Janet’s team demonstrated that the Claims Center reps are already highly stressed.”

Manager: “Really good to know. I just sent the check. I’ll ask the vendor to fix the problems before they ship.”

Me (screaming inside): “Noooooooooooooo!”

This well-intentioned manager did precisely the wrong thing. He asked for changes after sending the check. No surprise that the vendor never made the requested changes. Why would they? They had their money.

Not only did this scenario play out repeatedly at that company, but I’ve witnessed it throughout my UX career.

Solution

The solution is clear. If the vendor product does not meet customer and business needs, and the changes you request are within scope, don’t pay until the vendor makes the changes. It really is that simple.

Conclusion

In this piece, we’ve identified four common barriers to quality design and corresponding solutions:

  1. Complex regulations and standards
    The solution is to acknowledge and address complexity by devising realistic timelines and sufficient budget for research and iterative design.
  2. Shipping software with bugs with a promise to fix them later
    The solution is to avoid next-release syndrome and address serious problems now. Persuade decision-makers by re-defining the meaning of value within your organization.
  3. Insufficient time for design iterations
    The solution is to include design spikes in the agile development process. These bubbles of time temporarily take the place of a sprint and allow designers to focus on complex UX issues.
  4. Relying too heavily on vendors
    The solution is to retain leverage by withholding final payment until the vendor makes requested changes as long as these changes are within the original project scope.

The fourth solution is straightforward. While the first three are not easy, they are concrete because they can be applied directly to existing design processes. Their implementation does not require a massive reorganization or millions of dollars. It simply requires commitment to delivering a better experience.

Smashing Editorial
(ah, il)

Source: Smashing Magazine, Bringing A Better Design Process To Your Organization

How To Build A Real-Time Multiplayer Virtual Reality Game (Part 1)

dreamt up by webguru in Uncategorized | Comments Off on How To Build A Real-Time Multiplayer Virtual Reality Game (Part 1)

How To Build A Real-Time Multiplayer Virtual Reality Game (Part 1)

How To Build A Real-Time Multiplayer Virtual Reality Game (Part 1)

Alvin Wan



In this tutorial series, we will build a web-based multiplayer virtual reality game in which players will need to collaborate to solve a puzzle. We will use A-Frame for VR modeling, MirrorVR for cross-device real-time synchronization, and A-Frame Low Poly for low-poly aesthetics. At the end of this tutorial, you will have a fully functioning demo online that anyone can play.

Each pair of players is given a ring of orbs. The goal is to “turn on” all orbs, where an orb is “on” if it’s elevated and bright. An orb is “off” if it’s lower and dim. However, certain “dominant” orbs affect their neighbors: if a dominant orb switches state, its neighbors also switch state. Only player 2 can control the dominant orbs, while only player 1 can control non-dominant orbs. This forces both players to collaborate to solve the puzzle. In this first part of the tutorial, we will build the environment and add the design elements for our VR game.

The seven steps in this tutorial are grouped into three sections:

  1. Setting Up The Scene (Steps 1–2)
  2. Creating The Orbs (Steps 3–5)
  3. Making The Orbs Interactive (Steps 6–7)

This first part will conclude with a clickable orb that turns on and off (as pictured below). You will use A-Frame VR and several A-Frame extensions.

(Large preview)

Setting Up The Scene

1. Let’s Go With A Basic Scene

To get started, let’s take a look at how we can set up a simple scene with a ground:

Creating a simple scene

Creating a simple scene (Large preview)

The first three instructions below are excerpted from my previous article. You will start by setting up a website with a single static HTML page. This allows you to code from your desktop and automatically deploy to the web. The deployed website can then be loaded on your mobile phone and placed inside a VR headset. Alternatively, the deployed website can be loaded by a standalone VR headset.

Get started by navigating to glitch.com. Then, do the following:

  1. Click on “New Project” in the top right,
  2. Click on “hello-webpage” in the drop-down,
  3. Next, click on index.html in the left sidebar. We will refer to this as your “editor”.

You should now see the following Glitch screen with a default HTML file.

Glitch project: the index.html file

Glitch project: the index.html file (Large preview)

As with the linked tutorial above, start by deleting all existing code in the current index.html file. Then, type in the following for a basic webVR project, using A-Frame VR. This creates an empty scene by using A-Frame’s default lighting and camera.

<!DOCTYPE html>
<html>
  <head>
    <title>Lightful</title>
    <script src="https://aframe.io/releases/0.8.0/aframe.min.js"></script>
  </head>
  <body>
    <a-scene>
    </a-scene>
  </body>
</html>

Raise the camera to standing height. Per A-Frame VR recommendations (Github issue), wrap the camera with a new entity and move the parent entity instead of the camera directly. Between your a-scene tags on lines 8 and 9, add the following.

<!-- Camera! -->
<a-entity id="rig" position="0 3 0">
  <a-camera wasd-controls look-controls></a-camera>
</a-entity>

Next, add a large box to denote the ground, using a-box. Place this directly beneath your camera from the previous instruction.

<!-- Action! -->
<a-box shadow width="75" height="0.1" depth="75" position="0 -1 0" color="#222"></a-box>

Your index.html file should now match the following exactly. You can find the full source code here, on Github.

<!DOCTYPE html>
<html>
  <head>
    <title>Lightful</title>
    <script src="https://aframe.io/releases/0.8.0/aframe.min.js"></script>
  </head>
  <body>
    <a-scene>
      <!-- Camera! -->
      <a-entity id="rig" position="0 3 0">
        <a-camera wasd-controls look-controls></a-camera>
      </a-entity>

      <!-- Action! -->
      <a-box shadow width="75" height="0.1" depth="75" position="0 -1 0" color="#222"></a-box>
    </a-scene>
  </body>
</html>

This concludes setup. Next, we will customize lighting for a more mysterious atmosphere.

2. Add Atmosphere

In this step, we will set up the fog and custom lighting.

A preview of a simple scene with a dark mood

A preview of a simple scene with a dark mood (Large preview)

Add a fog, which will obscure objects far away for us. Modify the a-scene tag on line 8. Here, we will add a dark fog that quickly obscures the edges of the ground, giving the effect of a distant horizon.

<a-scene fog="type: linear; color: #111; near:10; far:15"></a-scene>

The dark gray #111 fades in linearly from a distance of 10 to a distance of 15. All objects more than 15 units away are completely obscured, and all objects fewer than 10 units away are completely visible. Any object in between is partially obscured.

Add one ambient light to lighten in-game objects and one directional light to accentuate reflective surfaces you will add later. Place this directly after the a-scene tag you modified in the previous instruction.

<!-- Lights! -->
<a-light type="directional" castshadow="true" intensity="0.5" color="#FFF" position="2 5 0"></a-light>
<a-light intensity="0.1" type="ambient" position="1 1 1" color="#FFF"></a-light>

Directly beneath the lights in the previous instruction, add a dark sky. Notice the dark gray #111 matches that of the distant fog.

<a-sky color="#111"></a-sky>

This concludes basic modifications to the mood and more broadly, scene setup. Check that your code matches the source code for Step 2 on Github, exactly. Next, we will add a low-poly orb and begin customizing the orb’s aesthetics.

Creating The Orbs

3. Create A Low-Poly Orb

In this step, we will create a rotating, reflective orb as pictured below. The orb is composed of two stylized low-poly spheres with a few tricks to suggest reflective material.

Rotating, reflective orb
(Large preview)

Start by importing the low-poly library in your head tag. Insert the following between lines 4 and 5.
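The script include itself did not survive in this copy; a typical include for the aframe-low-poly extension looks like the following. Note that the CDN path and version number here are assumptions, so verify the current URL against the library’s own documentation before using it:

```html
<!-- aframe-low-poly registers <lp-sphere> and the other low-poly primitives used below.
     The CDN path and version in this src are assumptions; check the library's docs. -->
<script src="https://cdn.jsdelivr.net/gh/alvinwan/aframe-low-poly@0.0.2/dist/aframe-low-poly.min.js"></script>
```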

Create a carousel, wrapper, and orb container. The carousel will contain multiple orbs, the wrapper will allow us to rotate all orbs around a center axis without rotating each orb individually, and the container will — as the name suggests — contain all orb components.

<a-entity id="carousel">
  <a-entity rotation="0 90 0" id="template" class="wrapper" position="0 0 0">
    <a-entity id="container-orb0" class="container" position="8 3 0" scale="1 1 1">
      <!-- place orb here -->
    </a-entity>
  </a-entity>
</a-entity>

Inside the orb container, add the orb itself: one sphere is slightly translucent and offset, and the other is completely solid. The two combined mimic reflective surfaces.

<a-entity class="orb" id="orb0" data-id="0">
  <lp-sphere seed="0" shadow max-amplitude="1 1 1" position="-0.5 0 -0.5"></lp-sphere>
  <lp-sphere seed="0" shadow max-amplitude="1 1 1" rotation="0 45 45" opacity="0.5" position="-0.5 0 -0.5"></lp-sphere>
</a-entity>

Finally, rotate the sphere indefinitely by adding the following a-animation tag immediately after the lp-sphere inside the .orb entity in the last instruction.

<a-animation attribute="rotation" repeat="indefinite" from="0 0 0" to="0 360 0" dur="5000"></a-animation>

Your source code for the orb wrappers and the orb itself should match the following exactly.

<a-entity id="carousel">
  <a-entity rotation="0 90 0" id="template" class="wrapper" position="0 0 0">
    <a-entity id="container-orb0" class="container" position="8 3 0" scale="1 1 1">
      <a-entity class="orb" id="orb0" data-id="0">
        <lp-sphere seed="0" shadow max-amplitude="1 1 1" position="-0.5 0 -0.5"></lp-sphere>
        <lp-sphere seed="0" shadow max-amplitude="1 1 1" rotation="0 45 45" opacity="0.5" position="-0.5 0 -0.5"></lp-sphere>
        <a-animation attribute="rotation" repeat="indefinite" from="0 0 0" to="0 360 0" dur="5000"></a-animation>
      </a-entity>
    </a-entity>
  </a-entity>
</a-entity>

Check that your source code matches the full source code for step 3 on Github. Your preview should now match the following.

Rotating, reflective orb
(Large preview)

Next, we will add more lighting to the orb for a golden hue.

4. Light Up The Orb

In this step, we will add two lights, one colored and one white. This produces the following effect.

Orb lit with point lights
(Large preview)

Start by adding the white light to illuminate the object from below. We will use a point light. Directly before #orb0 but within #container-orb0, add the following offset point light.

<a-entity position="-2 -1 0">
    <a-light distance="8" type="point" color="#FFF" intensity="0.8"></a-light>
</a-entity>

In your preview, you will see the following.

Orb lit with white point light
(Large preview)

By default, lights do not decay with distance. By adding distance="8", we ensure that the light fully decays at a distance of 8 units, preventing the point light from illuminating the entire scene. Next, add the golden light. Add the following directly above the last light.

<a-light class="light-orb" id="light-orb0" distance="8" type="point" color="#f90" intensity="1"></a-light>

Check that your code matches the source code for step 4 exactly. Your preview will now match the following.

Orb lit with point lights
(Large preview)

Next, you will make your final aesthetic modification to the orb and add rotating rings.

5. Add Rings

In this step, you will produce the final orb, as pictured below.

Golden orb with multiple rings
(Large preview)

Add a ring in #container-orb0 directly before #orb0.

<a-ring color="#fff" material="side:double" position="0 0.5 0" radius-inner="1.9" radius-outer="2" opacity="0.25"></a-ring>

Notice the ring itself does not contain color, as the color will be imbued by the point light in the previous step. Furthermore, the material="side:double" is important as, without it, the ring’s backside would not be rendered; this means the ring would disappear for half of its rotation.

However, the preview with only the above code will not look any different. This is because the ring is currently perpendicular to the screen. Thus, only the ring’s “side” (which has 0 thickness) is visible. Place the following animation in between the a-ring tags in the previous instruction.

<a-animation attribute="rotation" easing="linear" repeat="indefinite" from="0 0 0" to="0 360 0" dur="8000"></a-animation>

Your preview should now match the following:

Golden orb with ring
(Large preview)

Create a variable number of rings with different rotation axes, speeds, and sizes. You can use the following example rings. Any new rings should be placed underneath the last a-ring.

<a-ring color="#fff" material="side:double" position="0 0.5 0" radius-inner="2.4" radius-outer="2.5" opacity="0.25">
  <a-animation attribute="rotation" easing="linear" repeat="indefinite" from="0 45 0" to="360 45 0" dur="8000"></a-animation>
</a-ring>
<a-ring color="#fff" material="side:double" position="0 0.5 0" radius-inner="1.4" radius-outer="1.5" opacity="0.25">
  <a-animation attribute="rotation" easing="linear" repeat="indefinite" from="0 -60 0" to="-360 -60 0" dur="3000"></a-animation>
</a-ring>

Your preview will now match the following.

Golden orb with multiple rings
(Large preview)

Check that your code matches the source code for step 5 on Github. This concludes decor for the orb. With the orb finished, we will next add interactivity to the orb. In the next step, we will specifically add a visible cursor with a clicking animation when pointed at clickable objects.

Making The Orbs Interactive

6. Add A Cursor

In this step, we will add a white cursor that can trigger clickable objects. The cursor is pictured below.

clicking on orb
(Large preview)

In your a-camera tag, add the following entity. The fuse attribute gives this entity the ability to trigger click events. The raycaster attribute determines how often and how far to check for clickable objects. The objects attribute accepts a selector to determine which entities are clickable. In this case, all objects of class clickable are clickable.

<a-entity cursor="fuse: true; fuseTimeout: 250"
      position="0 0 -1"
      geometry="primitive: ring; radiusInner: 0.03; radiusOuter: 0.04"
      material="color: white; shader: flat; opacity: 0.5"
      scale="0.5 0.5 0.5"
      raycaster="far: 20; interval: 1000; objects: .clickable">
    <!-- Place cursor animation here -->
</a-entity>

Next, add cursor animation and an extra ring for aesthetics. Place the following inside the entity cursor object above. This adds animation to the cursor object so that clicks are visible.

<a-circle radius="0.01" color="#FFF" opacity="0.5" material="shader: flat"></a-circle>
<a-animation begin="fusing" easing="ease-in" attribute="scale"
   fill="backwards" from="1 1 1" to="0.2 0.2 0.2" dur="250"></a-animation>

Next, add the clickable class to the #orb0 to match the following.

<a-entity class="orb clickable" id="orb0" data-id="0">

Check that your code matches the source code for Step 6 on Github. In your preview, move your cursor off of and onto the orb to see the click animation in action. This is pictured below.

clicking on orb
(Large preview)

Note the clickable class was added to the orb itself and not the orb container. This is to prevent the rings from becoming clickable objects. This way, the user must click on the spheres that make up the orb itself.

In our final step for this part, you will add animation to control the on and off states for the orb.

7. Add Orb States

In this step, you will animate the orb in and out of an off state on click. This is pictured below.

Interactive orb responding to clicks
(Large preview)

To start, you will shrink and lower the orb to the ground. Add a-animation tags to the #container-orb0 right after #orb0. Both animations are triggered by a click and share the same easing function ease-elastic for a slight bounce.

<a-animation class="animation-scale" easing="ease-elastic" begin="click" attribute="scale" from="0.5 0.5 0.5" to="1 1 1" direction="alternate" dur="2000"></a-animation>
<a-animation class="animation-position" easing="ease-elastic" begin="click" attribute="position" from="8 0.5 0" to="8 3 0" direction="alternate" dur="2000"></a-animation>

To further emphasize the off state, we will remove the golden point light when the orb is off. However, the orb’s lights are placed outside of the orb object. Thus, the click event is not passed to the lights when the orb is clicked. To circumvent this issue, we will use a little JavaScript to pass the click event to the light. Place the following animation tag in #light-orb0. The light is triggered by a custom switch event.

<a-animation class="animation-intensity" begin="switch" attribute="intensity" from="0" to="1" direction="alternate"></a-animation>

Next, add the following click event listener to the #container-orb0. This will relay the clicks to the orb lights.

<a-entity id="container-orb0" ... onclick="document.querySelector('#light-orb0').emit('switch');">

Check that your code matches the source code for Step 7 on Github. Finally, pull up your preview, and move the cursor on and off the orb to toggle between off and on states. This is pictured below.

Interactive orb responding to clicks
(Large preview)

This concludes the orb’s interactivity. The player can now turn orbs on and off at will, with self-explanatory on and off states.

Conclusion

In this tutorial, you built a simple orb with on and off states, which can be toggled by a VR-headset-friendly cursor click. With a number of different lighting techniques and animations, you were able to distinguish between the two states. This concludes the virtual reality design elements for the orbs. In the next part of the tutorial, we will populate the orbs dynamically, add game mechanics, and set up a communication protocol between a pair of players.

Smashing Editorial
(rb, dm, il)

Source: Smashing Magazine, How To Build A Real-Time Multiplayer Virtual Reality Game (Part 1)

Collective #541

dreamt up by webguru in Uncategorized | Comments Off on Collective #541

C541_waves

Get Waves

A free SVG wave generator to make unique SVG waves. Choose a curve, adjust complexity and randomize.

Check it out




C541_storytime

StoryTime

StoryTime enables developers to easily simulate debugger-like visuals to tell or read a story about pieces of code.

Check it out







C541_divjoy

Divjoy

A React codebase generator where you can select the stack you want, pick a template and then export a complete React codebase. By Gabe Ragland.

Check it out




C541_inputdelay

Input delay

Monica Dinculescu created this experiment to see what amount of delay is too annoying for a user interaction like typing.

Check it out





C541_ambient

Day/Night Ambient Light Animation

Mandy Michael shares a demo that shows how to change the page content based on the light level in the room using the Ambient Light API. Works behind the #enable-generic-sensor-extra-classes flag in chrome://flags.

Check it out




C541_mcjs

MC.JS

An open source Minecraft clone built with ThreeJS, ReactJS, GraphQL, and NodeJS.

Check it out



C541_head

HeadBanger

An AI-powered interactive experiment where you can release fireballs from your mouth and head bang to break barriers.

Check it out


C541_martini

MARTINI

Vladimir Agafonkin introduces MARTINI, an open source, client-side terrain mesh generation library.

Check it out




Collective #541 was written by Pedro Botelho and published on Codrops.


Source: Codrops, Collective #541

Pitching Your Writing To Publications

dreamt up by webguru in Uncategorized | Comments Off on Pitching Your Writing To Publications

Pitching Your Writing To Publications

Pitching Your Writing To Publications

Rachel Andrew



Recently, I had a chat with Chris Coyier and Dave Rupert over on the Shoptalk Podcast about writing for publications such as Smashing Magazine and CSS-Tricks. One of the things we talked about was submitting ideas to publications — something that can feel quite daunting even as an experienced writer.

In this article, I’m going to go through the process for pitching, heavily based on my own experience as a writer and as Editor in Chief of Smashing. However, I’ve also taken a look at the guidelines for other publications in order to help you find the right places to pitch your article ideas.

Do Your Research

Read existing articles on the site that you would like to write for. Who do they seem to be aimed at? What tone of voice do the writers take? Does the publication tend to publish news pieces, opinion, or how-to tutorials? Check to see if there are already other pieces which are on the same subject as your idea, i.e. will your piece add to the conversation already started by those articles? If you can show that you are aware of existing content on a particular subject, and explain how you will reference it or add to that information, the editor will know you have done some research.

Research more widely; are there already good pieces on the subject that an editor will consider your piece to be a repeat of? There is always space for a new take on an issue, but in general, publications want fresh material. You should be ready to explain how your piece will reference this earlier work and build upon it, or introduce the subject to a new audience.

A good example from our own archives is the piece, “Replacing jQuery With Vue.js”. There are a lot of introductions to Vue.js, however, this piece was squarely aimed at the web developer who knows jQuery. It introduced the subject in a familiar way specifically for the target audience.

Find The Submission Guide

The next thing to do is to find the submission information on the site you want to write for. Most publications will have information about who to contact and what information to include. From my point of view, simply following that information and demonstrating you have done some research puts you pretty high up the queue to be taken seriously. At Smashing Magazine, we have a link to the guide to writing for us right there on the contact form. I’d estimate that only 20% of people read and follow those instructions.

Screenshot of the Smashing Contact Form
The link to our submission guide on our Contact Us page.

When you submit your idea, it is up to you to sell it to the publication. Why should I run with your idea over the many others that will show up today? Spending time over your submissions will make a huge difference in how many pieces you have accepted.

Different publications have different requirements. At Smashing Magazine, we ask you to send an outline first, along with some information about you so that we can understand your expertise in the subject matter. We’re very keen to feature new voices, and so we’ll accept pieces from writers who haven’t got a huge string of writing credentials.

The information we request helps us to decide if you are likely to be able to deliver a coherent piece. As our articles are technical in nature (often tutorials), I find that an outline is the best way to quickly see the shape of the proposal and the scope it will cover. A good outline will include the main headings or sections of the article, along with an explanation of what will be taught in that section.

For many other publications, a common request is for you to send a pitch for the article. This would typically be a couple of paragraphs explaining the direction your piece will take. Once again, check the submission guide for any specific details that publication is interested to see.

The Verge has an excellent submission guide which explains exactly what they want to see in a pitch:

“A good pitch contains a story, a narrative backbone. Pitches should clearly and concisely convey the story you plan to write and why it matters. The best pitches display promising pre-reporting and deep knowledge of the topic as well as a sense of the angle or insight you plan to pursue. If your story depends on access to a person or company, you should say whether you have obtained it already (and if not, what your prospects are). Pitches should also be written in the style you expect to write the story.”

— “How To Pitch The Verge,” The Verge

A List Apart explains what they will accept in their contribution page:

“… a rough draft, a partial draft, or a short pitch (a paragraph or two summarizing your argument and why it matters to our readers) paired with an outline. The more complete your submission is, the better feedback we can give you.”

— “Write For Us,” A List Apart

Slate has a list of dos and don’ts for pitching:

“Do distill your idea into a pitch, even if you have a full draft already written. If you happen to have a draft ready, feel free to attach it, but please make sure you still include a full pitch describing the piece in the body of the email.”

— “How To Pitch Slate,” Slate

Including your pitch or outline in the body of the email is a common theme of pitch guidelines. Remember that your aim is to make it as easy as possible for the editor to think, “that looks interesting”.

Include A Short Biography

The editor doesn’t need your life story, however, a couple of sentences about you is helpful. This is especially useful if you are a newer writer who has subject matter expertise but fewer writing credentials. If you are proposing an article to me about moving a site from WordPress to Gatsby, and tell me that the article is based on your experience of doing this on a large site, that is more interesting to me than a more experienced writer who has just thought it would be a good topic to write about.

If you do have writing credits, a few relevant links are more helpful than a link to your entire portfolio.

When You Can’t Find A Submission Guide

Some publications will publish an email address or contact form for submissions, but have no obvious guide. In that case, assume that a short pitch as described above is appropriate. Include the pitch in the body of the email rather than an attachment, and make sure you include contact details in addition to your email address.

If you can’t find any information about submitting, then check to see if the publication is actually accepting external posts. Are all the articles written by staff? If unsure, then get in touch via a published contact method and ask if they accept pitches.

I’ve Already Written My Article, Why Should I Send An Outline Or Pitch?

We ask for an outline for a few reasons. Firstly, we’re a very small team. Each proposal is assessed by me, and I don’t have time in the day to read numerous 3000-word first draft proposals. In addition, we often have around 100 articles in the writing process at any one time. It’s quite likely that two authors will want to write on the same subject.

On receiving an outline, if it is going in a similar direction to something we already have in the pipeline, I can often spot something that would add to — rather than repeat — the other piece. We can then guide you towards that direction, and be able to accept the proposal where a completed piece may have been rejected as too similar.

If you are a new writer, the ability to structure an outline tells me a lot about your ability to deliver us something useful. We are going to spend time and energy working with you on your article, and I want to know it will be worthwhile for all of us.

If you are an experienced writer, the fact that you have read and worked with our guidelines tells me a lot about you as a professional. Are you going to be difficult for our editorial team to work with and refuse to make requested changes? Or are you keen to work with us to shape a piece that will be most useful and practical for the audience?

In The Verge submission guide included above, they ask you to “clearly and concisely” convey the story you plan to write. Your pitch shouldn’t be an article with bits removed, or just its first two paragraphs. It’s literally a sales pitch for your proposed article; your job is to make the editor excited to read your full proposal! Some publications — in particular those that publish timely pieces on news topics — will ask you to attach your draft along with the pitch, however, you still need to get the editor to think it is worth opening that document.

Promoting Yourself Or Your Business

In many guides to self-promotion or bootstrapping the promotion of a startup, writing guest posts is something that will often be suggested. Be aware that the majority of publications are not going to publish an advert and pay you for the privilege.

Writing an article that refers to your product may be appropriate, as most of our expertise comes from doing the job that we do. It is worth being upfront when proposing a piece that would need to mention your product or the product of the company you work for. Explain how your idea will not be an advert for the company and that the product will only be mentioned in the context of the experience gained in your work.

Some publications will accept a trade of an article for some promotion. CSS-Tricks is one such publication, and describes what they are looking for as follows:

“The article is intended to promote something. In that case, no money changes hands. In this scenario, your pitch must be different from a sponsored post in that you aren’t just straight up pitching your product or service and that you’re writing a useful article about the web; it just so happens to be something that the promotion you’ll get from this article is valuable to you.”

— “Guest Posting,” CSS-Tricks

Writing for a popular publication will give you a byline, i.e. your credit as an author. That will generally give you at least one link to your own site. Writing well-received articles can be a way to build up your reputation and even introduce people to your products and services, but if you try and slide an advert in as an article, you can be sure that editors are very well used to spotting that!

Pitching The Same Idea To Multiple Publications

For time-sensitive pieces, you might be keen to spread the net. In that case, you should make publications aware that you have submitted the piece elsewhere. Otherwise, it is generally good practice to wait for a response before offering the piece to another publication. Slate writes,

“Do be mindful if you pitch your idea to multiple publications. We try to reply to everyone in a timely manner, typically within one to two days. As a general rule, and if the story isn’t too timely, it’s best to wait that amount of time before sharing the pitch with another publication. If you do decide to cast a wide net, it’s always helpful to let us know ahead of time so we can respond accordingly.”

— “How To Pitch Slate,” Slate

If Your Pitch Is Rejected

You will have ideas rejected. Sometimes, the editor will let you know why, but most often you’ll get a quick no, thanks. Try not to take these to heart; there are many reasons why the piece might be rejected that have nothing to do with the article idea or the quality of your proposal.

The main reasons I reject pitches are as follows:

  1. Obvious Spam
    This is the easy one. People wanting to publish a “guest post” on vague subjects, and people wanting “do-follow links”. We don’t tend to reply to these as they are essentially spam.
  2. No Attempt At A Serious Outline
    I can’t tell anything about an idea from two sentences or three bullet points, and if the author can’t spend the time to write an outline, I don’t think I want to have a team member working with them.
  3. Not A Good Topic For Us
    There are some outlines that I can’t ever see being a great fit for our readers.
  4. An Attempt To Hide An Advert
    In this case, I’ll suggest that you talk to our advertising team!
  5. Difficult To Work With
    Last but not least, authors who have behaved so badly during the pitch process that I can’t bring myself to inflict them on anyone else. Don’t be that person!

If I have a decent outline on a relevant subject in front of me, then one of two things is going to happen: I’ll accept the outline and get the author into the writing process, or I’ll reply to the author because there is some reason why we can’t accept the outline as it is. That will usually be because the target audience or tone is wrong, or we already have a very similar piece in development.

Quite often in these scenarios, I will suggest changes or a different approach. Many of those initial soft rejections become an accepted idea, or the author comes back with a different idea that does indeed work.

Ultimately, those of us who need to fill a publication with content really want you to bring us good ideas. To open my inbox and find interesting pitches for Smashing is a genuine highlight of my day. So please do write for us.

Things To Do

  • Research the publication, and the type of articles they publish;
  • Read their submissions guide, and follow it;
  • Be upfront if you have sent the pitch to other publications;
  • Include a couple of sentences about you, and why you are the person to write the article. Link to some other relevant writing if you have it;
  • Be polite and friendly, but concise.

Things To Avoid

  • Sending a complete draft along with the words, “How do I publish this on your site?”;
  • Sending things in a format other than suggested in the submissions guide;
  • Pitching a piece that is already published somewhere else;
  • Pitching a hidden advert for your product or services;
  • Following up aggressively, or sending the pitch to multiple editors and over Facebook Messenger and Twitter in an attempt to get noticed. We publish a pitch contact because we want pitches. It might take a day or two to follow up though!

Smashing Editorial
(il)

Source: Smashing Magazine, Pitching Your Writing To Publications

Monthly Web Development Update 8/2019: Strong Teams And Ethical Data Sensemaking

Anselm Hannemann



What’s more powerful than a star who knows everything? A team made not of stars but of people who love what they do, stand behind their company’s vision, work well together, and support each other. Like a galaxy made of stars, where not every star shines and not every star needs to: everyone has their place, their own strengths, their own weaknesses. Teams don’t consist of stars, they consist of people, and the most important thing is a great work and life culture. So don’t go for a moonshot when hiring; look for someone who fits into your team and encourages and supports your team’s values and members.

In terms of your own life, take some time today to take a deep breath and recall what happened this week. Go through it day by day and appreciate the actions, the negative ones as well as the positive ones. Accept that negative things happen in our lives as well, otherwise we wouldn’t be able to feel good either. It’s a helpful exercise to balance your life, to have a way of invalidating the feeling of “I did nothing this week” or “I was quite unproductive.” It makes you understand why you might not have worked as much as you’re used to — but it feels fine because there’s a reason for it.

News

  • Three weeks ago we officially exhausted the Earth’s natural resources for the year — with four months left in 2019. Earth Overshoot Day is a good indicator of where we currently stand in the fight against climate change, and it’s a great initiative by people who offer helpful advice on how we can push that date back, so that one day in the (hopefully) near future Overshoot Day won’t arrive before the end of the year.
  • Chrome 76 brings the prefers-color-scheme media query (e.g. for dark mode support) and multiple simplifications for PWA installation.

Web Performance

  • Some experiments sound silly, but in reality they’re not: Chris Ashton used the web for a day on a 50MB budget. In Zimbabwe, for example, where 1 GB costs an average of $75.20 (ranging from $12.50 to $138.46), 50MB is incredibly expensive. So reducing your app bundle size, image sizes, and overall page weight directly affects how happy your users are when they browse your site or use your service. If it costs them $3.76 (50MB) to access your new sports shoe teaser page, it’s unlikely that they will buy or recommend it.
  • BBC’s Toby Cox shares how they ditched iframes in favor of Shadow DOM to improve their site performance significantly. This is a good piece explaining the advantages and drawbacks of iframes and why adopting Shadow DOM takes time and still feels uncomfortable for most of us.
  • Craig Mod shares why people prefer to choose (and pay for) fast software. People are grateful for it and are easily annoyed if the app takes too much time to start or shows a laggy user interface.
  • Harry Roberts explains the details of the “time to first byte” metric and why it matters.
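
The data-cost arithmetic behind the $3.76 figure in the first item is worth making explicit. A quick sketch (assuming 1 GB = 1,000 MB, matching the article’s numbers):

```javascript
// Rough cost of loading a page on a metered connection.
// Figure from the article: $75.20 per GB average in Zimbabwe.
function dataCost(megabytes, costPerGB) {
  // Convert the per-GB price to a per-MB price, then round to cents.
  return Math.round((costPerGB * megabytes) / 1000 * 100) / 100;
}

console.log(dataCost(50, 75.2)); // 3.76
```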

HTML & SVG

  • With Chrome 76 we get the loading attribute which allows for native lazy loading of images just with HTML. It’s great to have a handy article that explains how to use, debug, and test it on your website today.

Lazy loading images of cats

No more custom lazy-loading code or a separate JavaScript library needed: Chrome 76 comes with native lazy loading built in. (Image credit)

Security

  • Here’s a technical analysis of the Capital One hack. A good read for anyone who uses Cloud providers like AWS for their systems because it all comes down to configuring accounts correctly to prevent hackers from gaining access due to a misconfigured cloud service user role.

Work & Life

  • “For a long time I believed that a strong team is made of stars — extraordinary world-class individuals who can generate and execute ideas at a level no one else can. These days, I feel that a strong team is the one that feels more like a close family than a constellation of stars. A family where everybody has a sense of predictability, trust and respect for each other. A family which deeply embodies the values the company carries and reflects these values throughout their work. But also a family where everybody feels genuinely valued, happy and ignited to create,” said Vitaly Friedman in a recent update, and I couldn’t agree more.
  • How do you justify working for a company that has a significant influence on our world and our everyday lives, and not necessarily with the best intentions? Meredith Whittaker wrote up her story of starting at Google, having an amazing time there, and now leaving because she could no longer justify Google using her work and technology in the fossil energy, healthcare, governance, and transportation businesses, not always with the focus on improving everyone’s lives or making our environment a better place, but simply for profit.
  • Synchronous meetings are a problem in nearly every company. They take a lot of time from a lot of people and disrupt any schedule or focused work. So here’s how Buffer switched to asynchronous meetings, including great tips and insights into why many tools out there don’t work well.
  • Actionable advice is what we usually look for when reading an article. However, it’s not always possible or the best option to write actionable advice and certainly not always a good idea to follow actionable advice blindly. That’s because most of the time actionable advice also is opinionated, tailored, customized advice that doesn’t necessarily fit your purpose. Sharing experiences instead of actionable advice fosters creativity so everyone can find their own solution, their own advice.
  • Sam Clulow’s “Our Planet, Our Problem” is a great piece of writing that reminds us of who we are and what’s important for us and how we can live in a city and switch to a better, more thoughtful and natural life.
  • Climate change is a topic all around the world now, and it seems that many people are concerned about it and want to take action. And yet, last month we had the busiest air travel day in history. Aviation is responsible for a significant share of climate-relevant emissions, so it’s key to reduce air travel as much as possible from today on. Coincidentally, this was also the hottest week ever measured in Europe. We as individuals need to cut down on flights, regardless of how tempting that next $50 holiday flight to a nice destination might be, and regardless of whether it’s an important business meeting. That’s what video conferencing solutions are for. Why claim to work remotely and then fly around the world dozens of times? There are so many nice destinations nearby, reachable by train or, if needed, by car.

Update from a team member of what happened during the week and what he’s working on

The team at Buffer shares what worked and what didn’t work for them when they switched to asynchronous meetings. (Image credit)

Going Beyond…

  • Leo Babauta shares a tip on how to stop overthinking by cutting through indecision. We will never have the certainty we’d like to have in our lives so it’s quite good to have a strategy for dealing with uncertainty. As I’m struggling with this a lot, I found the article helpful.
  • The ethical practices that can serve as a code of conduct for data sensemaking professionals are built upon a single fundamental principle, the same one medical doctors swear an oath to before becoming licensed: do no harm. Here’s “Ethical Data Sensemaking.”
  • Paul Hayes shares his experience from trying to live plastic-free for a month and why it’s hard to stick to it. It’s surprising how shopping habits need to be changed and why you need to spend your money in a totally different way and cannot rely on online stores anymore.
  • Oil powers the cars we drive and the flights we take, it heats many of our homes and offices. It is in the things we use every day and it plays an integral role across industries and economies. Yet it has become very clear that the relentless burning of fossil fuels cannot continue unabated. Can the world be less reliant on oil?
  • Uber and Lyft admit that they’re making traffic congestion worse in cities. Next time you use any of those new taxi apps, try to remind yourself that you’re making the situation worse for many people in the city.

Thank you for reading. If you like what I write, please consider supporting the Web Development Reading List.

—Anselm


Source: Smashing Magazine, Monthly Web Development Update 8/2019: Strong Teams And Ethical Data Sensemaking

The (Upcoming) WordPress Renaissance

Leonardo Losoviz



It has been 8 months since Gutenberg was launched as the default content editor in WordPress. Depending on who you ask, you may hear that Gutenberg is the worst or the best thing that has happened to WordPress (or anything in between). But one thing most people seem to agree on is that Gutenberg has been steadily improving. At the current pace of development, it’s only a matter of time until its most outstanding issues have been dealt with and the user experience becomes truly pleasant.

Gutenberg is an ongoing work in progress. While using it, I experience maddening nuisances, such as floating options that I can’t click on because the block placed below gets selected instead, unintuitive grouping of blocks, columns with so much gap that they become useless, and the “+” element calling for my attention all over the page. However, the problems I encounter are still relatively manageable (an improvement over previous versions) and, moreover, Gutenberg has started turning its potential benefits into reality: many of its most pressing bugs have been ironed out, its accessibility issues are being solved, and new and exciting features are continuously being made available. What we have so far is pretty decent, and it will only get better.

Let’s review the new developments which have taken place since Gutenberg’s launch, and where it is heading.

Note: For more information about this topic, I recommend watching WordPress founder Matt Mullenweg’s talk during the recent WordCamp Europe 2019.

Why Gutenberg Was Needed

Gutenberg arrived just in time to kick-start the rejuvenation of WordPress, to attempt to make WordPress appealing to developers once again (and reverse its current status of being the most dreaded platform). WordPress had stopped looking attractive because of its focus on not breaking backwards compatibility, which prevented WordPress from incorporating modern code, making it look pale in comparison with newer, shinier frameworks.

Many people argue that WordPress was in no peril of dying (after all, it powers more than a third of the web) and that Gutenberg was not really needed, and they may be right. However, even if WordPress was in no immediate danger, being disconnected from modern development trends meant it was headed towards obsolescence, possibly not in the short term but certainly in the mid to long term. Let’s review how Gutenberg improves the experience for different WordPress stakeholders: developers, website admins, and website users.

Developers have recently embraced building websites with the JavaScript libraries Vue and React because (among other reasons) of the power and convenience of components, which translates into a satisfying developer experience. By jumping on the bandwagon and adopting this technique, Gutenberg enables WordPress to attract developers once again, allowing them to code in a manner they find gratifying.
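
In Gutenberg itself, blocks are registered in JavaScript with `registerBlockType` from the `@wordpress/blocks` package; at its heart, though, the component idea is simply a function from attributes to markup. A framework-free sketch of that idea (the block name and markup here are invented for illustration, not Gutenberg’s actual API):

```javascript
// Conceptual sketch: a "block" as a pure function from attributes to markup.
// Real Gutenberg blocks also declare edit/save components, an icon, etc.
function noticeBlock({ message, tone = 'info' }) {
  return `<div class="notice notice--${tone}"><p>${message}</p></div>`;
}

console.log(noticeBlock({ message: 'Draft saved.' }));
// → <div class="notice notice--info"><p>Draft saved.</p></div>
```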

Website admins can manage their content more easily, improve their productivity, and achieve things that couldn’t be done before. For instance, placing a YouTube video through a block is easier than through the TinyMCE textarea; blocks can serve optimal images (compressed, resized according to the device, converted to a different format, and so on), removing the need to do it manually; and the WYSIWYG (What You See Is What You Get) capabilities are decent enough to provide a real-time preview of how the content will look on the website.
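
The “serve optimal images” part, for instance, comes down to generating a `srcset` so each device downloads an appropriately sized file. A sketch (the URL pattern and widths are illustrative, not what any particular block outputs):

```javascript
// Build a srcset string with one candidate per target width,
// so the browser can pick the smallest image that fits the device.
function buildSrcset(baseUrl, widths = [320, 640, 1280]) {
  return widths.map((w) => `${baseUrl}?w=${w} ${w}w`).join(', ');
}

console.log(buildSrcset('/img/hero.jpg'));
// → /img/hero.jpg?w=320 320w, /img/hero.jpg?w=640 640w, /img/hero.jpg?w=1280 1280w
```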

Website users, in turn, get access to powerful functionality and a more satisfying experience when browsing our sites, similar to using highly dynamic, user-friendly web applications such as Facebook or Twitter.

In addition, Gutenberg is slowly but surely modernizing the whole process of creating the website. While currently it can be used only as the content editor, some time in the future it will become a full-fledged site builder, allowing users to place components (called blocks) anywhere on a page, including the header, footer, sidebar, etc. (Automattic, the company behind WordPress.com, has already started work on a plugin adding full site editing capabilities for its commercial site, from which it could be adapted for the open-source WordPress software.) Through the site-building feature, non-techy users will be able to add very powerful functionality to their sites very easily, so WordPress will keep welcoming the greater community of people working on the web (and not just developers).

Fast Pace Of Development

One of the reasons why Gutenberg has seen such a fast pace of development is because it is hosted on GitHub, which simplifies the management of code, issues and communication as compared to Trac (which handles WordPress core), and which makes it easy for first-time contributors to become involved since they may already have experience working with Git.

Being decoupled from WordPress core, Gutenberg can benefit from rapid iteration. Even though a new version of WordPress is released every 3 months or so, Gutenberg is also available as a standalone plugin, which sees a new release every two weeks (while the latest release of WordPress contains Gutenberg version 5.5, the latest plugin version is 6.2). Having access to powerful new functionality for our sites every two weeks is very impressive indeed, and it unlocks further functionality from the broader ecosystem (for instance, the AMP plugin requires Gutenberg 5.8+ for several features).

Headless WordPress To Power Multiple Stacks

One of the side effects of Gutenberg is that WordPress has increasingly become “headless”, further decoupling the rendering of the application from the management of the content. This is because Gutenberg is a front-end client that interacts with the WordPress back-end through APIs (the WP REST API), and the development of Gutenberg has demanded a consistent expansion of the available APIs. These APIs are not restricted to Gutenberg; they can be used together with any client-side framework, to render the site using any stack.
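
The core endpoints live under `/wp-json/wp/v2/`; a minimal headless client can be this small (the site URL is a placeholder, and `fetch` assumes a modern browser or Node 18+):

```javascript
// Build the posts endpoint URL; _fields trims the JSON payload.
function postsEndpoint(siteUrl, perPage = 5) {
  const base = siteUrl.replace(/\/+$/, '');
  return `${base}/wp-json/wp/v2/posts?per_page=${perPage}&_fields=id,title,link`;
}

// Fetch the latest posts from a (headless) WordPress site.
async function fetchLatestPosts(siteUrl) {
  const res = await fetch(postsEndpoint(siteUrl));
  if (!res.ok) throw new Error(`WP REST API request failed: ${res.status}`);
  return res.json(); // an array of { id, title: { rendered }, link }
}
```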

An example of a stack we can leverage for our WordPress application is the JAMstack, which champions an architecture based on static sites augmented through 3rd party services (APIs) to become dynamic (indeed, Smashing Magazine is a JAMstack site!). This way, we can host our content in WordPress (leveraging it as a Content Management System, which is what it is truly good at), build an application that accesses the content through APIs, generate a static site, and deploy it on a Content Delivery Network, providing for lower costs and greater access speed.
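
At build time, the fetched content can then be rendered straight into static HTML. A sketch of that step, using the post shape the REST API returns (`title.rendered` is real; the surrounding markup is illustrative):

```javascript
// Render one post teaser as static HTML (a JAMstack build step).
function renderPost({ title, link }) {
  return `<article><h2><a href="${link}">${title.rendered}</a></h2></article>`;
}

// Render the whole index page from an array of posts.
function renderIndex(posts) {
  return `<main>\n${posts.map(renderPost).join('\n')}\n</main>`;
}
```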

New Functionality

Let’s play with the Gutenberg plugin (not the version included in WordPress core) and see what functionality has been added in the last few months.

Block Manager

Through the block manager, we can decide what blocks will be available on the content editor; all others will be disabled. Removing access to unwanted blocks can be useful in several situations, such as:

  • Many plugins are bundles of blocks; when installing such a plugin, all of its blocks will be added to the content editor, even if we need only one
  • As many as 40 embed providers are implemented in WordPress core, yet we may need just a few of them for the application, such as Vimeo and YouTube
  • Having a large number of blocks available can overwhelm us, impairing our workflow by adding extra layers the user needs to navigate and leading to suboptimal use of our time; hence, temporarily disabling unneeded blocks can help us be more effective
  • Similarly, having only the blocks we need avoids potential errors caused by using the wrong blocks; in particular, establishing which blocks are needed can be done in a top-down manner, with the website admin analyzing all available blocks and deciding which ones to use, and imposing the decision on the content managers, who are then relieved from this task and can concentrate on their own duties.

Block manager

Enabling/disabling blocks through the manager (Large preview)

Cover Block With Nesting Elements

The cover block (which allows us to add a title over a background image, generally useful for creating hero headers) now defines its inner elements (i.e. the heading and buttons, which can be added for creating a call to action) as nested elements, allowing us to modify its properties in a uniform way across blocks (for instance, we can make the heading bold and add a link to it, place one or more buttons and change their background color, and others).

Cover block

The cover block accepts nested elements (Large preview)

Block Grouping And Nesting

Please beware: These features are still buggy! However, plenty of time and energy is being devoted to them, so we can expect them to work smoothly soon.

Block grouping allows us to group several blocks together, so when moving them up or down on the page, all of them move together. Block nesting means placing a block inside of a block, and there is no limit to the nesting depth, so we can have blocks inside of blocks inside of blocks inside of… (you get the idea). Block nesting is especially useful for adding columns to the layout through a columns block; each column can then contain any kind of block, such as images, text, videos, etc.
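
Conceptually, nested blocks form a tree, which is why arbitrary depth works. A simplified model (note: this is not Gutenberg’s real serialization format, which stores blocks as HTML comments inside post content):

```javascript
// A two-column layout modeled as a block tree.
const layout = {
  name: 'columns',
  children: [
    { name: 'column', children: [{ name: 'image', children: [] }] },
    { name: 'column', children: [{ name: 'paragraph', children: [] }] },
  ],
};

// Walking the tree is a plain recursion; here we count blocks at any depth.
function countBlocks(block) {
  return 1 + block.children.reduce((sum, child) => sum + countBlocks(child), 0);
}

console.log(countBlocks(layout)); // 5
```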

Block grouping and nesting

Blocks can be grouped together, and nested inside each other (Large preview)

Migration Of Pre-Existing Widgets

Whereas in the past there were several methods for adding content on the page (TinyMCE content, shortcodes, widgets, menus, etc.), the blocks attempt to unify all of them into a single method. Currently, newly-considered legacy code, such as widgets, is being migrated to the block format.

Recently, the “Latest Posts” widget has been re-implemented as a block, supporting real-time preview of how the layout looks when configuring it (changing the number of words to display, showing an excerpt or the full post, displaying the date or not, etc).

Latest posts widget

The “Latest posts” widget includes several options to customize its appearance (Large preview)

Motion Animation

Moving blocks up or down the page used to involve an abrupt transition, sometimes making it difficult to understand how blocks were re-ordered. Since Gutenberg 6.1, a new feature of motion animation solves this problem by adding a realistic movement to block changes, such as when creating, removing or reordering a block, giving a greatly improved visual cue of the actions taken to re-order blocks. In addition, the overall concept of motion animation can be applied throughout Gutenberg to express change and thus improve the user experience and provide better accessibility support.

Motion animation
Blocks have a smooth effect when being re-ordered. (Large preview)

Functionality (Hopefully) Coming Soon

According to WordPress founder Matt Mullenweg, only 10% of Gutenberg’s complete roadmap has been implemented so far, so there is plenty of exciting new stuff in store for us. Work on the new features listed below has either already started, or the team is currently experimenting with them.

  • Block directory
    A new top-level item in wp-admin which will provide block discovery. This way, blocks can be independently installed, without having to ship them through a plugin.
  • Navigation blocks
    Currently, navigation menus must be created through their own interface. However, soon we will be able to create these through blocks and place them anywhere on the page.
  • Inline installation of blocks
    Being able to discover blocks, the next logical step is to be able to install a new block on the fly, right where it is needed most: in the post editor. We will be able to install a block while writing a post, use the new block to generate its HTML, save its output on the post, and remove the block, all without ever browsing to a different admin page.
  • Snap to grid when resizing images
    When placing several images in a post, resizing them to the same width or height can be a painful process of repeated trial and error. Soon, it will be possible to snap an image to a virtual grid layer that appears in the background as the image is being resized.

WordPress Is Becoming Attractive (Once Again)

Several reasons support the idea that WordPress will soon once again become an attractive platform to code for, as it was in the past. Let’s look at a couple of them.

PHP Modernization

WordPress’s quest to modernize does not end with incorporating modern JavaScript libraries and tooling (React, webpack, Babel): it also extends to the server-side language, PHP. WordPress’s minimum required PHP version was recently bumped up to 5.6, and should be bumped to 7.0 as early as December 2019. PHP 7 offers remarkable advantages over PHP 5, most notably more than doubling execution speed, and the later releases (7.1, 7.2 and 7.3) have each become faster still.

Even though there seem to be no official plans to move beyond PHP 7.0 to its later versions, once the momentum is there it is easier to keep it going. PHP itself is also being improved relentlessly. The upcoming PHP 7.4, due in November 2019, will bring plenty of improvements, including arrow functions and the spread operator inside arrays (as in modern JavaScript), and a mechanism to preload libraries and frameworks into the OPcache to further boost performance, among several other exciting features.
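For readers unfamiliar with the two syntax features mentioned above, here is a minimal JavaScript sketch of what they look like; PHP 7.4 adopts both with near-identical syntax (the variable names are illustrative, not from the article):

```javascript
// Arrow function: a concise closure with an implicit return.
const double = (n) => n * 2;

// Spread operator inside an array literal: expand one array
// into another without calling concat().
const parts = [2, 3];
const numbers = [1, ...parts, 4]; // [1, 2, 3, 4]

// The PHP 7.4 equivalents read almost the same:
//   $double  = fn($n) => $n * 2;
//   $numbers = [1, ...$parts, 4];

console.log(double(21));        // 42
console.log(numbers.join(',')); // 1,2,3,4
```

Having the same idioms on both sides of the stack is part of what makes the platform feel modern again: a developer moving between Gutenberg’s JavaScript and WordPress’s PHP no longer has to switch mental models for basics like closures and array composition.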

Reusability Of Code Across Platforms

A great side effect of Gutenberg being decoupled from WordPress is that it can be integrated with other frameworks too. And that is exactly what has happened! Gutenberg is now available for Drupal, and Laraberg (for Laravel) will soon be officially released (currently testing the release candidate). The beauty of this phenomenon is that, through Gutenberg, all these different frameworks can now share/reuse code!

Conclusion

There has never been a better time to be a web developer. The pace of development across all the relevant languages and technologies (JavaScript, CSS, image optimization, variable fonts, cloud services, etc.) is staggering. Until recently, WordPress was watching this trend from the outside, and developers may have felt that they were missing the modernization train. But now, through Gutenberg, WordPress is riding the train too, and keeping up with its history of steering the web in a positive direction.

Gutenberg is not fully functional yet: it has plenty of issues to resolve, and it may still be some time before it truly delivers on its promises. However, it looks better and better with each new release, steadily bringing new possibilities to WordPress. As such, this is a great time to reconsider giving Gutenberg a try (that is, if you haven’t done so yet). Anyone dealing with WordPress in some capacity (website admins, developers, content managers, website users) can benefit from this new normal. I’d say this is something to be excited about, wouldn’t you?

Smashing Editorial
(dm, il)

Source: Smashing Magazine, The (Upcoming) WordPress Renaissance
