
Is There Such A Thing As Too Much Social Proof?


Suzanne Scacca



It’s very easy to start a business these days. But succeeding in that business is another story. There are just too many people who want to escape the 9-to-5, do something with their big idea and make a better life for themselves in the process. I totally applaud that.

However, it’s not practical to think that the idea will sell itself. Consumers need to be given some reason to trust that their money (or time) will be well spent. And when a business or product is new, the best way to gain this trust is by getting clients, customers and others to vouch for you.

That said, is it possible to go overboard with testimonials, reviews, case studies, client logos and other forms of social proof? And is there a wrong way to build social proof into a mobile website or PWA?

Yes and yes!

When Too Much Social Proof Is A Bad Thing

I was working on the copy for a new website earlier this year. My client told me that the design team had prepared a wireframe for the home page and wanted me to use that as a framework for the copy. Normally, I would be stoked. When I work as a writer, I want to stay in writer mode and not have to worry about layout and design suggestions.

The only problem was that the home page they wanted was littered with social proof and trust marks. It would’ve looked like this (note: the purple boxes contain social proof):

A sample wireframe of a home page with too much social proof. (Source: Canva)

In reviewing the wireframe, I had a number of gripes. For starters, it was way too long, especially for mobile visitors.


There was no way people were going to scroll seven times to find the section that finally invites them to take action.

Secondly, there was too much social proof. I know that seems counterintuitive. After all, isn’t it better to have more customer validation? I think in some cases that’s correct. Like with product reviews.

In BrightLocal’s 2018 Local Consumer Review Survey, respondents said they want to see at least 40 product reviews, on average, before believing a star rating.

BrightLocal's consumer review survey says that consumers want to see 40 reviews before believing a business's star rating. (Source: BrightLocal)

Even then, consumers aren’t looking for a perfect score. As you can see here, only 9% of respondents need a business to have a perfect rating or review in order to buy something from them:

BrightLocal survey respondents prefer to see a minimum of 3- or 4-star ratings instead of 5. (Source: BrightLocal)

And I can tell you why that’s the case.

I used to write product reviews. One of the things I’d do when assessing the quality of a product (before making my own judgments) was to look at what online reviewers — professional reviewers and customers — had to say about it. And let me tell you… there are tons of fake reviews out there.

They’re not always easy to spot on their own. However, if you look at enough reviews at once, you’ll start to notice that they all use the same verbiage. That usually means the company paid them to leave the review or gave family, friends and employees pre-written reviews to drop.

I’m not the only one who’s noticed this trend either. BrightLocal’s respondents have as well:

33% of BrightLocal respondents have seen lots of fake reviews while 42% have seen at least one. (Source: BrightLocal)

Only 26% of respondents said they hadn’t come across a fake review while 42% had seen at least one in the last year and 33% had seen a lot.

When it comes to things like testimonials and case studies, I think consumers are growing just as wary about the truthfulness of the praise.

TrustRadius surveyed B2B buyers on the subject of online reviews vs. case studies. This is what it found:

TrustRadius asked respondents to assess their feelings on customer reviews vs. case studies. (Source: TrustRadius)

It makes sense why consumers don’t feel as though case studies are all that authentic, trustworthy or balanced. Case studies are written by the companies themselves, so of course they’re only going to share a flattering portrait of the business or product.

Having worked in the digital marketing space for a number of years, I can tell you that many customer testimonials aren't genuine either. That's why businesses need to stop worrying about how much social proof they have and start paying more attention to the truthfulness and quality of what they're sharing with visitors.

The point I’m trying to make isn’t that we should ditch social proof. It’s an important part of the decision-making process for consumers. But just because it can affect their decision, it doesn’t mean that repeatedly bashing them over the head with it will work either. If your website and its messaging can’t seal the deal, a bunch of logos and quotes meant to convince them to buy won’t either.

What you need to focus on when building social proof into a mobile site or PWA is quality over quantity. Sure, you might want to highlight the sheer quantity of reviews that have been gathered on a product, but in terms of space on your website? With social proof, less is more.

Tips For Building Social Proof Into A Mobile Website Or PWA

You don’t have a lot of room to spare on mobile and you don’t want to make your visitors dig and dig to find the important details. So, while you do need social proof to help sell the business and its product, you need to do so wisely.

That means giving your content room to shine and strategically enhancing it with social proof when it makes the most sense to do so.

Consolidate Social Proof on the Home Page

I know how hard it can be to convince people to work with you or buy from you when your business is new. That’s especially the case when you’re entering a field that’s already dominated by well-known and well-reviewed companies.

However, rather than make your home page longer than it needs to be — for desktop or mobile visitors — why not consolidate the strongest social proof you have and put it in one section?

What’s neat about this option is that you can get creative with how you mix and match your social proof.

Customer Reviews + Trust Seals

Two Men and a Truck is the kind of company that needs customer testimonials. It’s the only way they’re going to effectively convince new customers to trust them to enter their home and carefully transport their belongings from one location to another.

Local movers Two Men and a Truck stack a testimonial on top of trust seals on the home page. (Source: Two Men and a Truck)

Rather than bog down their home page with testimonials, Two Men and a Truck use one especially positive review and a number of professional trust seals to close the deal in one fell swoop.

Google Reviews + Facebook Reviews

Another way to consolidate social proof on the home page is by aggregating reviews from other platforms as the website of Drs. Rubinstein and Ducoff does:

The home page of Drs. Rubinstein and Ducoff shows off the latest reviews from Google and Facebook along with an average star rating across all platforms. (Source: Drs. Rubinstein and Ducoff)

This is a tiny section — it doesn’t even fill the entire screen — and yet it packs a lot of punch.

First, you have the total number of reviews and average star rating shown at the top. Remember that survey from BrightLocal? This is the kind of thing that would go a long way in convincing new patients to sign up. There are a good number of reviews to go on, and the average rating seems realistic.

Also, because these reviews come from Google and Facebook, they’re connected to real people’s profiles. Plus, the date is included in the Google review.

Unlike testimonials which are just a quote and a person’s name (if we’re lucky), this is a quote, a star rating and the date it was published. This way, prospective patients don’t have to wonder how long ago it was that Drs. Rubinstein and Ducoff received these reviews.
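To make the math behind an "average rating across all platforms" number concrete, here is a minimal TypeScript sketch of how such an aggregate could be computed. The field names and figures are hypothetical and are not taken from Google's or Facebook's actual APIs.

```ts
// Hypothetical per-platform summaries; a real site would pull these from
// the Google and Facebook review APIs or a third-party aggregator.
interface PlatformReviews {
  platform: "Google" | "Facebook";
  count: number;          // number of reviews on that platform
  averageRating: number;  // average star rating on that platform
}

// Weighted average, so 120 Google reviews count for more than 15 Facebook reviews.
function combinedRating(sources: PlatformReviews[]): { total: number; average: number } {
  const total = sources.reduce((sum, s) => sum + s.count, 0);
  const weighted = sources.reduce((sum, s) => sum + s.count * s.averageRating, 0);
  return { total, average: total ? weighted / total : 0 };
}

const summary = combinedRating([
  { platform: "Google", count: 120, averageRating: 4.6 },
  { platform: "Facebook", count: 15, averageRating: 4.8 },
]);
// Render something like `${summary.average.toFixed(1)} stars from ${summary.total} reviews`.
```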

Twitter + App Store Reviews + Awards

You’ll find another creative example of consolidated social proof on the Pocket website.

Pocket uses Twitter, both app stores, and the Webby Awards as trust marks. (Source: Pocket)

Even though Pocket is free to use, that’s not necessarily enough to convince someone to try a new piece of software — especially if you want them to download it as a mobile app.

Rather than rely on faceless testimonials, though, Pocket has chosen to show off some convincing and verifiable social proof:

  • A quote from a Twitter user with a healthy follower base,
  • The actual rating of its app on both app stores,
  • The number of times it’s won a Webby award.

It’s a unique patchwork of social proof which is sure to stand out from the traditional quote block many websites use to promote their products.

Make It Sticky

One of the great things about making the move to a PWA is you can use app-like elements like a sticky bar to show off important information to visitors. If it makes sense to do so, you could even put some social proof there.

Google Reviews Widget

There’s been a big surge in independent mattress companies in recent years. Tuft & Needle. Loom & Leaf. Saatva. They all seem to promise the same thing — a better quality memory foam mattress at a steal of a price — so it’s got to be hard for consumers to choose between them.

One way to make this differentiation is with Google Reviews.

On the desktop website for Lull, the home page reviews callout is tucked into the bottom-left corner.

Lull shares Google customer reviews in a widget on its desktop website. (Source: Lull)

It’s almost too small to notice the reviews with so much more to take in on the home page. That’s a good thing though. The social proof is always present without being overwhelming.

What’s interesting to note, though, is that the mobile counterpart doesn’t show any Google reviews on the home page. It’s not until someone gets to the Mattress page where they’re able to see what other customers have said.

Lull shares Google customer reviews in a sticky bar on its PWA. (Source: Lull)

In this particular screenshot, you can see that the Mattress page on the PWA has a section promoting the product’s reviews. However, even when visitors scroll past that section, the sticky bar continues to remind them about the quantity and quality of reviews the mattress has received on Google.
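If you wanted to build a similar sticky review bar yourself, the core of it is just a fixed-position element that stays in view while the page scrolls. The TypeScript sketch below is a generic illustration with made-up numbers, not Lull's actual markup or data.

```ts
interface ReviewSummary {
  average: number; // e.g. 4.8
  count: number;   // e.g. 11432
  source: string;  // e.g. "Google"
}

// Builds a bar that stays pinned to the bottom of the viewport while the user scrolls.
function mountStickyReviewBar(summary: ReviewSummary): void {
  const bar = document.createElement("aside");
  bar.textContent =
    `★ ${summary.average.toFixed(1)} · ${summary.count.toLocaleString()} ${summary.source} reviews`;
  Object.assign(bar.style, {
    position: "fixed",
    bottom: "0",
    left: "0",
    right: "0",
    padding: "0.75rem 1rem",
    textAlign: "center",
    background: "#ffffff",
    boxShadow: "0 -2px 6px rgba(0, 0, 0, 0.15)",
    zIndex: "100",
  });
  document.body.appendChild(bar);
}

mountStickyReviewBar({ average: 4.8, count: 11432, source: "Google" });
```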

CTA Banner

Another type of website this sticky social proof would be useful for would be one in hospitality. For example, this website for the Hyatt Regency San Antonio:

An example of one of the Suites pages on the Hyatt Regency San Antonio website. (Source: Hyatt Regency San Antonio)

Just like the Lull example above, the Hyatt Regency tucks its social proof into a sticky bar on its internal sales pages.

The Hyatt Regency places TripAdvisor reviews next to its conversion elements in a sticky bar. (Source: Hyatt Regency San Antonio)

Visitors see the number of TripAdvisor reviews and star ratings when they first enter the Suites page. When they scroll downwards, the sticky bar stays in place just long enough (about one full scroll) for visitors to realize, “Cool. It’ll be there if or when I’m ready to do more research.”

What’s nice about how this particular sticky bar is designed is that the reviews are part of the conversion bar. It’s kind of like saying, “Want to book your trip, but feeling nervous about it? Here’s one last thing to look at before you make up your mind!”

Create a Dedicated Page for Social Proof

If you’re not building a PWA or you have too much social proof to show off in a small space, create a dedicated page for it. This is a great option, too, if you plan to share something other than just testimonials or reviews.

Testimonials/Reviews

Winkworth is an estate agency in the UK. Testimonials are a useful way to convince other sellers and lessors to work with the agency. Yet, the home page doesn’t have any. Instead, the company has chosen to place them on a Testimonials page.

The Winkworth estate agency keeps its home page free of testimonials and instead places them on a dedicated page. (Source: Winkworth)

It’s not as though this page is just a throwaway of every positive thing people have said. The testimonials look like they’ve been hand-picked by Winkworth, especially the longer ones that contain more details about the experience and the people they worked with.

An example of some of the hand-picked testimonials Winkworth has gathered for its Testimonials page. (Source: Winkworth)

Each testimonial includes the person’s name as well as which Winkworth location they’re referring to. This way, visitors can learn more about the experience at specific locations instead of just hearing how great Winkworth is as a whole.

Case Studies

It’s not just testimonials that could use their own page. Case studies shouldn’t clutter up the home page either.

While Bang Marketing promotes its case studies with a promotional banner on the home page, that’s all you hear of it there. They save their customers’ stories for individual pages like this one:

Bang Marketing includes a video testimonial with every case study. (Source: Bang Marketing)

Each case study page is minimally designed, but captures all of the information needed to tell the story.

First, there’s a video from the client explaining what Bang Marketing was able to do for them. Then, there’s a brief description of what the team worked on. Finally, high-quality images provide visitors with a look at the resulting product.

This is a much more effective way to share case studies than placing a barrage of portfolio images all over the home page.

Press

There are two ways to handle the Press section of a website. The company could publish its own press releases or it can share information about where it’s been featured in the press.

While the former is useful for sharing company news and wins with visitors, it’s just too self-promotional and won’t help much with conversion. The latter option could really make a big impact though.

This, for instance, is what visitors will find on the About & Press page for The Dean Hotel:

The Dean Hotel includes magazine covers and article screenshots as social proof on its website. (Source: The Dean Hotel)

After a short intro of the hotel, the rest of the page is covered in magazine covers and article screenshots that go back as far as 2013. Visitors can click through to read each of the articles, too.

The Dean Hotel uses articles as social proof on its website. (Source: The Dean Hotel)

This is a unique way for a website of any type to share social proof with visitors.

If your client happens to have a bunch of positive press and never-ending hype surrounding its brand, try to leverage that on the site. Plus, by including screenshots from the articles themselves, you get another opportunity to show off the product (or, in this case, the hotel and its rooms).

Wrapping Up

Consumers have become very savvy when it comes to marketing and sales online. That's not to say that they never respond to it; they do, usually when it's done genuinely and with transparency. However, we're at a point where a brand saying, "We're the best! Trust us! Buy from us!", doesn't usually cut it. They need more validation than that.

At the end of the day, your mobile website or PWA needs social proof to convince visitors to convert.

That said, be careful with how you build social proof into the site, especially on the home page. You don't have time or space to waste, so don't create something unnecessarily bulky just so you can show off how many testimonials, reviews, case studies, client logos and high-profile partnerships you have. This is about quality over quantity, so make it count.


Source: Smashing Magazine, Is There Such A Thing As Too Much Social Proof?

15 Questions To Ask Your Next Potential Employer


Robert Hoekman Jr



In my book “Experience Required”, I encourage in-house UX professionals to leave companies who refuse to advance their UX intelligence and capability. There are far too many companies these days who understand the value of UX to waste your time being a martyr for one who will only frustrate you. Your best chance of doing a good job is to avoid a bad position.

Smartly, during a recent Q&A about the book, an audience member asked how we can avoid taking these jobs in the first place. What kinds of questions, he wondered, can you ask during an interview to spot red flags before the company stabs the whole flagpole into your sacred UX heart?

Know What You Want To Know

There’s the usual stuff, sure, such as asking why the position you’re applying for is currently open. What the company’s turnover rate is like. Why that turnover rate is so low or high. A little Googling will easily enough net you a decent list of broad questions you can ask any employer.

But what you really want is to get UX-specific. You want to home in on precisely what your life might be like should you take the position.


Your best chance of doing a good job is to avoid a bad position.

Sadly, I lacked a great answer at the time to the question about interview questions, so I let it eat at me until I woke up at three a.m. two days later and started writing notes. That morning, I emailed my reply to the moderator.

Ask A Great Question, Then Shut Up

To devise the list below, I considered what kinds of things I’d wish a company knew and understood about UX prior to working with them. I can operate in all kinds of situations—as a UX and process innovation consultant, this has been my job, and pleasure, for nearly 13 years now—but I want to know from the start, every time, that the effort will be set up for success. These questions aim to uncover the dirty details that will tell me what I’m walking into.

Much like a good validation session or user interview, these questions are open-ended and designed to draw out thoughtful, long-winded responses. (One-word answers are useless.) I strongly recommend that when and if you ask them, you follow each question with a long, stealthy vow of silence. People will tell you all about who they are if you just shut up long enough to hear them do it. Stay quiet for at least ten seconds longer than you think is reasonable and you’ll get the world.


People will tell you all about who they are if you just shut up long enough to hear them do it.

I’d ask these questions of as many individuals as possible. Given that tech interviews are often hours-long and involve many interviewers, you should be able to grab yourself a wealth of good answers before you head out the door to process and sleep.

If, on the contrary, you are given too little time to ask all these questions, prioritize the ones you’re personally most concerned about, and then consider that insufficient interview time might be a red flag.

Important: The key to the answers you receive is to read between the lines. Listen to what is said, note what is not said, and decide how to interpret the answers you get. I’ve included some red flags to watch out for along with each question below.

The Questions

Let’s get right to it.

1. How does this company define UX? As in, what do you believe is the purpose, scope, and result of good UX work?

Intent

Literally every person on Earth who is asked this question will give a slightly, or wildly, different answer than you expect or hope for. At the very least, the person interviewing you should have an opinion. They should have a sense of how the company views UX, what the various UX roles have to offer, and what effect they should have.

Red Flag(s)

The UX team has a very limited role, has no real influence, and the team, for the most part, is stretched so thin you could put them on a cracker.

2. How do the non-UX people on your product team currently participate in UX decisions?

Follow-ups: Describe a recent example of this kind of participation. What was the UX objective? How was that objective vetted as a real need? What did you do to achieve the objective, step-by-step? How did it turn out? What did you learn?

Intent

Find out how the entire product team approaches UX and how collaborative and supportive they might be in acquiring and acting on good research insights.

Red Flag(s)

They don’t participate in UX decisions.

3. What UX roles exist in the organization, and what do they do?

Intent

Determine where you’ll fit in, and how difficult it might be for you to gain influence, experience, or mentorship (depending on what you’re after). Also, build on the previous question about who does what and how.

Red Flag(s)

UX people at the company are heavily skilled in graphic design, and not so skilled in strategy. The current team members have limited influence. Your role will be similar. Strategy is handled by someone else, and it trickles down to the UX team for execution.

4. Who is your most experienced UX person and in what ways does that experience separate them from others?

Intent

Determine the range of UX intelligence on the team from highest to lowest. Is the person at the top whip-smart and a fantastic leader? Does that person mentor the others and make them better?

Red Flag(s)

The interviewer cannot articulate what makes that person better or more compelling than others. If they can’t answer this question, you’re speaking to someone who has no business making a UX hiring decision. Ask to speak to someone with more inside knowledge.

Noteworthy, but not necessarily a red flag: If you learn that the most experienced person on the team is actually someone with a very slight skill set, this can mean either there’s room for you to become an influencer, or the company puts so little value on UX that they’ve selected only employees with a small view of UX. The latter could mean you’ll spend all your time trying to prove the value of bigger UX involvement and more strategic work. You may like that sort of thing. I do. This would not be a red flag for me. It might be for you.

5. What are the company’s plans for UX long-term? (Expand it? Reduce it? How so, and why? Is there a budget for its expansion? Who controls it and how is it determined?)

Intent

Map out your road for the next couple of years. Can you rise into the role you want? Or will you be stuck in a cul-de-sac with zero chance of professional growth?

Red Flag(s)

We plan to keep doing exactly what we do now, and what we do now is pretty boring or weak. Also, we have no budget—like, ever—so if you want to bring in a consultant, attend a seminar, hire another person, or run a comprehensive usability study with outside customers, well, good luck with that.

6. How do UX professionals here communicate their recommendations?

Follow-up: How could they improve?

Intent

Learn how they do it now, and more importantly, whether or not it works.

Red Flag(s)

The interviewer has no answer, or—far worse—has an anti-answer that involves lots of arm-waving and ideas falling on deaf ears. The former can, again, mean the interviewer has no business interviewing a UX candidate. The latter can mean the UX team is terrible at communicating and selling its ideas. While this can be overcome with your much better communication skills, it will almost certainly mean the company has some baggage to wade through. Poor experiences in the past will put other product team members on the defensive. You’ll have to play some politics and work extra hard on building rapport to get anywhere.

7. Who tends to offer the most resistance to UX recommendations and methods and why?

Follow-up: And how much power does that person have?

Intent

This person will either give you the most grief or will give you the great opportunity to improve your communication skills (remember: design is communication!). Knowing who it is up front and how that person operates can tell you what the experience will be like.

Red Flag(s)

Executives, because they distrust UX. If you lack support at the top, it will be a daily struggle to achieve anything substantive.

8. What do UX practitioners here do to advance their values and methods beyond project work? Please be specific.

Intent

See how motivated the UX team is to perpetuate UX values to the rest of the company and improve how the team works.

Red Flag(s)

They don’t.

9. What do you think they should do differently? Why?

Intent

Discover how your interviewer feels about UX. This is, after all, a person who has a say in hiring you. Presumably, this person will be a big factor in your success.

Red Flag(s)

Keep their noses out of product development, stop telling the engineers what to do (speaks to perception of pushy UX people).

10. Describe a typical project process. (How does it start? What happens first? Next? And then?)

Intent

Find out if there is a process, what it looks like, and how well it aligns with your beliefs as a UX professional.

Red Flag(s)

You’ll be assigned projects from the top. You’ll research them, design a bunch of stuff in a vacuum with no way to validate and without any iteration method, and then you’ll hand all your work to the Engineering team, who will then have a thousand questions because you never spoke to each other until just now.

Bonus Question

How and when does the team try to improve on its process? (If it doesn’t, let’s call that a potential red flag as well.)

11. How has your company learned from its past decisions, and what have you done with those learnings?

Intent

UX is an everlasting experiment. Find out if this company understands it’s supposed to learn from the work and become smarter as a result.

Red Flag(s)

No examples, no thoughts.

12. If this is an agency who produces work for clients: What kind of support or backup does this agency provide for its UX recommendations, and how much power does the UX group have to push back against wrongheaded client ideas?

Follow-ups: How does the team go about challenging those ideas? Provide a recent example.

Intent

Find out how often you’ll be thrown under the proverbial bus when a client pushes back against what you know to be the right approach to a given problem. Your job will be to make intelligence-based recommendations; don’t torture yourself by working with people who refuse to hear them.

Red Flag(s)

The interviewer says the agency does whatever the clients demand. You will be a glorified wireframe monkey with no real power to change the world for the better.

13. How does the company support the UX group’s work and methods?

Intent

Determine how the company as a whole thinks about UX, both as a team and a practice. Is UX the strange alien in the corner of the room, or is it embraced and participated in by every product team member?

Red Flag(s)

UX is a strange alien. Good luck getting anyone to listen to you.

14. What design tools (software) does your team use and why?

Follow-ups: How receptive are people to trying new tools? How does evolution happen?

Intent

Know what software you should be familiar with, why the team uses it, and how you might go about introducing new tools that could be better in some situations.

Red Flag(s)

Gain insight into how the team thinks about the UI portion of the design process. Does it start with loose ideas drawn on napkins and gradually move toward higher-quality? Or does it attempt to start with perfection and end up throwing out a lot of work? (See the next question for more on this.)

15. Does a digital design start low-fi or high-fi, and what is the thinking behind this approach?

Follow-up: If you start lo-fi, how does a design progress?

Intent

You can waste a lot of hours on pixel-perfect work you end up throwing out. A company who burns through money like that is also going to be the first one to cut staff when things get tight. No idea should be carried through to its pixel-perfect end until it’s been collaborated on and vetted somehow, so you want to know that the company is smart enough to start lo-fidelity and move gradually to hi-fidelity. Hi-fi work should be the result of validation and iteration, not the start of it. A lo-fi > hi-fi process mitigates risk.

Red Flag(s)

All design work starts and ends in Photoshop or Sketch, and is expected to be 100% flawless and final before anyone sees what you’ve produced.

Running The Interview

In an unrelated Q&A years ago, a hiring manager asked how to spot a good UX professional during an interview. I answered that he should look for the person asking all the questions. I repeated this advice in Experience Required.

Now you can be the one asking all the questions.

And in doing so, not only will you increase your odds of being offered the gig, you’ll know long before the offer shows up whether to accept it.

If you, dear reader, have more ideas on how to scavenger-hunt a company’s red flags, we’re all ears. Tell us about it in the comments below.


Source: Smashing Magazine, 15 Questions To Ask Your Next Potential Employer

Collective #550






Thinking in React Hooks

Amelia Wattenberger’s draft for a guide to the fundamental mindset change when switching from React class components to functional components + hooks.

Read it
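To give a flavor of that mindset shift, here is roughly what the move from a class component to a function component looks like with the useState hook. This generic counter is my own illustration, not an example taken from the guide itself.

```tsx
import { useState } from "react";

// The same stateful counter a class component would need `this.state`
// and `this.setState` for, written as a function component with a hook.
function Counter() {
  const [count, setCount] = useState(0);
  return (
    <button onClick={() => setCount(count + 1)}>
      Clicked {count} times
    </button>
  );
}
```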










Raindrop

A rain effect that realistically interacts with elements on a page. By Neal Agarwal.

Check it out



Nullish coalescing

Learn about the nullish coalescing proposal that adds a new short-circuiting operator meant to handle default values in JavaScript.

Read it
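In short, the ?? operator only falls back to the default when the left-hand side is null or undefined, unlike ||, which also discards perfectly valid falsy values such as 0 or an empty string. A small sketch:

```ts
const settings = { delay: 0, label: "" };

// `||` treats every falsy value (0, "", false, NaN) as missing:
const delayWithOr = settings.delay || 300;       // 300, the explicit 0 is lost

// `??` only falls back when the value is null or undefined:
const delayWithNullish = settings.delay ?? 300;  // 0 is kept
const label = settings.label ?? "Untitled";      // "" is kept
```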





Collective #550 was written by Pedro Botelho and published on Codrops.


Source: Codrops, Collective #550

How AI Is Helping Solve Climate Change


Nicholas Farmen



Have you heard of the French artist Marcel Duchamp? One of his most famous works is the “fountain” which was created from an ordinary bathroom urinal. In simply renaming this common object, Duchamp successfully birthed a completely new style of art.

The same can be done with AI. Why do humans only have to use this powerful invention to solve business-related issues? Why can’t we think a little more like Duchamp and use this ‘all-powerful’ technology to solve one of the scariest problems that mankind has ever faced?

The Global Threat Of Climate Change

If you’ve read any recent reports and predictions about the future of our climate, you’ve probably realized that mankind is running out of time to find a solution for the global threat of climate change. In fact, a recent Australian policy paper proposed a 2050 scenario where, well, we all die.

To those who aren’t scared of water levels rising 25 meters by 2050, there have been other studies that suggest human hardships are right around the corner. In March of 2012, the World Water Assessment Programme predicted that by 2025, 1.8 billion people on earth will be living in regions with absolute water scarcity.

So what data and research is leading scientists to believe there will be a water or food apocalypse scenario in the future?

According to NASA, the main cause of climate change is the rising amount of greenhouse gases in our atmosphere. And sadly, ‘mother earth’ is not doing this all by herself.

In 1830, humans began engaging in activities that released greenhouse gases, contributing to the rising temperatures that we are feeling today. Some of these activities I refer to include the burning of fossil fuels, the pollution of oceans, and deforestation. However, even the mass production of beef is contributing to climate change.

Now, you may be wondering how humans could combat and limit our greenhouse gas emissions. Obviously, we should be limiting all of the activities that I alluded to above. This would mean limiting our electricity, coal, and oil usage, planting trees, and sadly for many, giving up steak dinners altogether.

But would all of this be enough to undo centuries of atmospheric pollution? Is all of this even accomplishable before humans are forced to face the extinction of their species? I don’t know. Humans haven’t even been able to cease the production of beef, let alone our daily oil-guzzling automobiles and airplanes.

If only there was a very intelligent software that could run some emissions numbers, and tell us if all of these efforts would be enough to prevent future disaster scenarios…

Melting icebergs, a symbol of global warming. (Image source: Unsplash)

AI Approaches And Environmental Use Cases

Solving any problem takes time. With climate change, it took scientists about 40 years to gain any sort of understanding of the problem. And that’s fair — humans had to first study the climate to make sure climate change existed, then study the causes of climate change to see the role humans have played. But where are we today after all of this study? Still studying.

And the problem with climate change is that time is not on our side — mankind has to find and implement some solutions relatively fast. That’s where AI could help.

To date, there are two different approaches to AI: rules-based and learning-based. Both AI approaches have valid use cases when it comes to studying the environment and solving climate change.

Rules-based AI consists of coded algorithms of if-then statements that are basically meant to solve simple problems. When it comes to the climate, a rules-based AI could be useful in helping scientists crunch numbers or compile data, saving humans a lot of time in manual labor.

But a rules-based AI can only do so much. It has no memory capabilities — it’s focused on providing a solution to a problem that’s defined by a human. That’s why learning-based AI was created.

Learning-based AI is more advanced than rules-based AI because it diagnoses problems by interacting with the problem. Basically, learning-based AI has the capacity for memory, whereas rules-based AI does not.

Here’s an example: let’s say you asked a rules-based AI for a shirt. That AI would find you a shirt in the right size and color, but only if you told it your size and preferences. If you asked a learning AI for a shirt, it would assess all of the previous shirt purchases you’ve made over the past year, then find you the perfect shirt for the current season. See the difference?
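To make the distinction concrete, here is a toy TypeScript sketch of that shirt example. The data and matching logic are invented purely for illustration; a real learning-based system would use a trained model rather than hand-written heuristics.

```ts
type Shirt = { size: string; color: string; season: "summer" | "winter" };

// Rules-based: applies only the explicit if-then criteria the user supplies.
function rulesBasedPick(catalog: Shirt[], size: string, color: string): Shirt | undefined {
  return catalog.find(s => s.size === size && s.color === color);
}

// "Learning"-based: remembers past purchases and infers preferences from them.
function learningBasedPick(
  catalog: Shirt[],
  pastPurchases: Shirt[],
  currentSeason: Shirt["season"]
): Shirt | undefined {
  const preferredSize = pastPurchases[pastPurchases.length - 1]?.size;
  const preferredColors = new Set(pastPurchases.map(p => p.color));
  return catalog.find(
    s => s.size === preferredSize && preferredColors.has(s.color) && s.season === currentSeason
  );
}
```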

When it comes to helping solve climate change, a learning-based AI could essentially do more than just crunch CO2 emission numbers. A learning-based AI could actually record those numbers, study causes and solutions, and then recommend the best solution — in theory.

AI Impacting Climate Change, Today

To most, AI is buzz word used to describe interesting tech software. But to the companies below, AI is starting to be seen as a secret weapon.

SilviaTerra

Forests are important for our climate. The carbon dioxide that’s emitted by many human activities is actually absorbed by trees. So if only we had more trees…

This is why SilviaTerra was brought to life.

Powered by the funds and technology of Microsoft, SilviaTerra uses AI and satellite imaging to predict the sizes, species, and health of forest trees. Why is this important? It means that conservationists are saved countless hours of manual fieldwork. It also means that we can help trees grow bigger, stronger, and healthier, so they can continue to help our climate.

DeepMind

Sometimes, we may ask ourselves, “What can’t Google do?” Well, it turns out Google can’t really do everything.

Looking to reduce its costs (and potentially its carbon footprint), Google turned to a company called DeepMind. Together, the two companies developed an AI that would teach itself how to use only the bare minimum amount of energy necessary to cool Google’s data centers.

The result? Google was able to cut the amount of energy they use to cool their data centers by 35%. But that may not even be the coolest part! DeepMind’s co-founder, Mustafa Suleyman, said that their AI algorithms are general enough to where the two companies may be able to use them for other energy-saving applications in the future.

Although AI is still very controversial among many, it’s difficult not to agree on how it can help to improve sales, productivity, and even customer service. (Image source: Unsplash)

Green Horizon Project

All of you data-lovers out there know that it’s hard to say you’re impacting something if you’re unable to measure your impact. This is why the Green Horizon Project came about.

IBM’s Green Horizon Project is an AI that creates self-configuring weather and pollution forecasts. IBM created the project with the hope that they could help cities become more efficient, one day.

Their aspirations became a reality in China. Between 2012 and 2017, IBM’s Green Horizon Project helped the city of Beijing decrease their average smog levels by 35%.

CycleGANs

So here’s a term you may never have heard of in your life: “GAN.” It stands for Generative Adversarial Network. Basically, it’s a pair of neural networks (a generator and a discriminator) trained against each other, so that the generator learns to produce convincing synthetic data largely on its own.

Why is the term important? Because automation is important when you have limited time and resources to solve a problem.

Researchers at Cornell University used GANs to train an AI to produce images that portray geographical locations before and after extreme weather events. The visuals produced by this AI could help scientists predict the impacts of certain climate changes, helping humans prioritize our combative efforts.

Software With The Potential To Impact Climate Change

Looking at the AI that is already being used to have a positive impact on climate change, you may be thinking that we don’t need any more new software. And maybe you’re not wrong — why don’t we repurpose the software we do have?

With that being said, here are a few pieces of software with the potential to be secret weapons:

Airlitix

Airlitix is an AI and machine-learning software that is currently being used in drones. While it was originally developed to automate greenhouse management processes, it could quite easily be used to manage the health of national forests. Airlitix has the capacity to not only collect temperature, humidity, and carbon dioxide data, but the AI can also analyze soil and crop health.

But with humans needing to plant over 1.2 trillion trees to combat climate change, we should consider automating our efforts further. Instead of taking the time to tend to national parks, the Airlitix software could be built upon so that drones could plant our trees, release plant nutrients, or even deter forest arsonists.

Time and again, drones have proved to be useful during natural disasters. (Image source: Unsplash)

Google Ads

Both Google and Facebook have very powerful AI software that they currently use to create relevant consumer ads using consumer browsing data. In fact, Google’s AI ‘Google Ads’ has helped their company earn hundreds of billions in revenue.

While revenue is cool, the Google Ads algorithm currently promotes consumer purchases relatively objectively. Imagine if the AI could be rewritten to prioritize the ads of companies that are offering sustainable products and services.

Nowadays, there isn’t much competition for Google. There’s Bing, Yahoo, DuckDuckGo, and AOL. (Out of the people I know, I don’t know any that use AOL.) If you’re feeling fearless, maybe you could develop a new search engine that helps connect consumers with environmentally-friendly companies.

Sure, it would be hard to compete with companies as large as Google, but you don’t have to compete forever to make a profit. There’s always a chance your startup gets acquired, and then you ride off into the sunset.

AlphaGo

While AlphaGo is an AI software that could help scientists find the next ‘wonder drug,’ it was originally created by DeepMind to teach itself how to master the board game Go. After beating the world’s best Go players, its successors have since moved on to conquer other complex games such as chess and shogi.

But what do board games have to do with climate change? Well, if the AlphaGo AI can outsmart humans at a game as complex as Go, maybe it can outsmart us in coming up with creative ways to limit and reduce the number of greenhouse gases in our atmosphere.

Future Outlook For AI And Climate

As I see it, the purpose of AI is to assist mankind in solving problems. Climate change has proven to be a complex problem that humans are becoming great at studying, but I have yet to see a very positive future-outlook from environmentalists in the news.

If not to help humans influence climate change directly, couldn’t we use AI to portray doomsday scenarios that scare the world into coming together? Could we use AI to portray positive potential outlooks that would be possible if people were to do more in their daily lives to help triage climate issues?

Even with the latest Amazon fires, I didn’t see any tweets about the idea of using drones to combat the spread of flames. It’s clear to me that even with all of the impressive AI software and tech available to humans today, environmental use cases are still not widespread knowledge.

So my advice to readers is to try the ‘Duchamp approach’ — today. Consider the AI and tech that you use or develop regularly, and see if there’s a way to reimagine it. Who knows, you may be the one to solve a problem that has stumped some of the best climatologists and scientists of our time.



Source: Smashing Magazine, How AI Is Helping Solve Climate Change

My Design Process Of The Cover Design For Smashing Magazine Print Issue #1


Veerle Pieters



Back in 2016, Vitaly Friedman asked me to design the cover and layout for a printed version of Smashing Magazine, a magazine for web designers and developers. The design I created back then for the cover and inside template layout, however, was shelved for a while as the project was paused for about two years owing to other priorities. Later, after Smashing Magazine launched its new website, a new style was born, and the design I had come up with didn’t really match anymore. So it was dropped.

The illustration used for the cover page.

Around mid 2018, the project was reignited, and I was asked to design a new layout template for the magazine. Later, around the beginning of this year, I also redesigned the cover. Now, the pilot issue of a shiny new Smashing Magazine Print has been launched.

Old cover designs created back in 2016 for Smashing Magazine Print.

The table of contents pages.

I’m very happy they chose my initial design of the table of contents, as I was really fond of it myself. The version I created later (see the above image to the right) was very different, since I went for something closer to the current design style.

The first page design for the credits.

In my first design back in 2016, I could choose the typefaces, and I had total freedom over the design style. It was totally different — very geometric and more modernistic. So I was very happy to see that some of the designs were adopted in the magazine’s final layout, like the table of contents and this page design for the introduction.

Reshape to Fit the New Design Style

The challenge now was to reshape the design to fit the current style of orange-red roundness, and cartoon cats. The answer was, of course, very simple: start from scratch.

Brainstorming and Sketching

Fortunately, the theme of the first edition had been identified, which made it easier for me to think about a suitable illustration. Smashing Print #1 would be about ethics and privacy. My first idea in terms of the overall design concept was to try out something along the direction of Noma Bar’s negative space design style. That’s easier said than done, of course, but I thought it would be awesome if I could pull it off and come up with something clever like that.

Sketches of an eye, a keyhole and a magnifying glass that came to mind as suitable subjects to use in the illustration.

After writing down a few keywords (spying, watching, tracing), things like an eye, a keyhole and a magnifying glass came to mind as suitable subjects to use in my illustration. As for “tracing” I thought of a trail of digital data, which I saw in the shape of a perfect curvy line with ones and zeros. So I doodled a couple of basic ideas.

Inspiration Browsing

While designing this cover I did a lot of browsing around. Here are a couple of images that inspired me. The bottom-left one inspired me purely in terms of the layout. In the top-right one I really like the rounded shapes, plus its simplicity and contrasting colors. The middle-top and bottom-right ones use cute figures and a fun, vertical 2D approach. The top-left one has nice smooth shapes and colors, and I like its strong image. There were more images, for sure, but these five did it for me.

Images that inspired me for the cover design.

First Design

Choosing Colors

I often start a design by choosing my color palette first. The colors I picked here were chosen purely because I felt they go well together. I wasn’t sure I would use all of them, but somehow I’m used to having a color palette in circles placed above my artboard. Then I use the color picker tool to select the color fill I want to apply, or I select them all and make them global swatches.

Selecting a color palette.

Then I worked with the doodle of the magnifying glass as an eye in Illustrator and played around with a bit of color and composition. I thought adding some colored bars at the bottom would give the illustration an eye-catching touch. They represent digital data gathered from users, converted into analytical graphs.

First drafts of the cover design.

I ended up with the design shown to the left. (Ignore the name of the magazine, as this was changed later on.) I wasn’t sure how much of the Smashing orange-red I should use, so I tried out a version with a lot of orange as well, even though I preferred the other one.

While I did like the result, the idea of doing something with a trail also appealed to me as a second concept. I visualized a person walking around with a smartphone, leaving a literal trail of all their interactions. That trail was then picked up, zoomed in on, saved and analyzed. At the beginning of the trail I added a magnifying glass. I would have also mixed in some graph bars, but at this point I didn’t know where or how exactly I would incorporate them into my composition, though I was already playing with the idea of using some sort of rounded shape background, combined with some subtle patterns.

Initial sketches and doodles.

Typically, I don’t sketch out my entire design. I only quickly doodle the idea and sketch out the elements I need in more detail, like the person with the phone. Once I had the concept fixed in my mind, I started out designing in Adobe Illustrator. First, I created a grid of guides to be used for the background shapes, and also for positioning the trail and figure. There were a couple of steps to get to this final design.

Final Design

Setting Up a Grid

The inspiration image at the bottom left encouraged me to go for a layout with a lot of white space at the top for the title and some white space at the bottom to add three key articles. As for the illustration itself, I envisioned using a square grid, perhaps going all the way over the spine and back.

Final cover design shown in Adobe Illustrator with grid guides and layers panel.

I created this square grid and placed the guides in a separate layer. Once this was set up, I started with the walking man and his smartphone, positioning him somewhere at the top-left.

Next came the curvy path. I just drew an angled line on top of the grid and used the corner widget to convert these into perfect rounded corners. I was thinking of using ones and zeros on the trail, because that’s how I visualize digital data. I turned the curvy path into a fine dotted line with a very wide gap to use as a guide to place the numbers. Once I started to place the numbers on each dot, it looked way too busy, so I decided to place one tiny dot between each number.

The next thing in the process was the creation of the background. I only had a vague idea in my head: a composition of geometrical vertical shapes with rounded corners in different colors from the palette. During this phase, I did a lot of experimenting. I moved and recolored shapes over and over. Once I had the flat colored shapes finished, I started adding in patterns on top. I tried out tiny dot grids that I randomly shaped in length and width, and applied color to. This was all a matter of intuition, to be honest, trying out something, then trying out something else, comparing both and choosing what worked best: changing color, changing the transparency mode, opacity value, and so on.

The bar graphs and icons were created in the last phase, together with the magnifying glass, and the spine and back. I just kept the idea at the back of my head, and waited till I had the man and the background shapes ready. Finally, I added in some basic icons to refer to the type of action made on the data, such as geolocation.

Back Cover

As for the back cover, I had already envisioned the background composition going all the way around, only much lighter. That’s how I came up with the idea of using a light area in the center with a couple of intersecting colored lines there.

The back cover design of the magazine.

In the final printed version, text is added in the center space, nicely framed in a rounded box with a yellow border, so the composition of the lines you see here has been removed and doesn’t match the printed version.

Spine

For the spine, I’d had the fun idea earlier of having the Smashing logo build up with each release (see image at the top of the article), but the tricky thing here is that each edition needs to have the exact same thickness or the whole concept falls apart. It wasn’t realistic since I wasn’t sure each edition would have exactly the same page count. I had to remember that the width of the spine could vary. So I came up with the idea of using some sort of pattern combinations that can vary in width, but still have the magazines connected.

Spine designs of the magazine.

The general idea was also to use a different theme pattern for each issue. The pilot issue uses fine dots in combination with a capsules pattern. In the spine I use a couple of others. The idea is to achieve a coherent composition when you place or stack the issues in the right order, which also serves as motivation to buy every issue. 😉

Drawing Can Be Really Simple

Here I’ll describe a quick process of a simple detail of the cover illustration: the creation of the walking man’s face. I know a lot of people are convinced that drawing in Adobe Illustrator isn’t easy and that you have to use the pen tool a lot, but that’s not true. You can create beautiful illustrations using only simple shapes like rectangles and circles, combined with the corner widget, pathfinder options and align tools.

Quick Design Process Of The Walking Man

If you keep the shapes in your illustration simple and flat (2D), drawing in Adobe Illustrator can be easy. Take the head of the walking man: I didn’t even use the pen tool, only simple shapes (rectangles and a circle) and the following steps:

The head of the walking man brought to life in Adobe Illustrator.

1. Rectangles and circle

With the sketch in the background, I drew a rectangle for each part of the head, and a circle for his ear.

2. Align and unite

Next, I used the align options to align the shapes correctly, and the Pathfinder > Unite option, and I also moved the top-left corner point a little to the right for his nose, using the key.

Aligning and adding rounded corners.

3. Rounded corners

Then, with the Direct Selection tool (white arrow) I created the rounded corners for the hair and chin.

4. Arrange and apply color

All that remains is removing the strokes and applying a proper fill color for each shape. Last but not least, I made sure the shapes were in the correct stacking order by using the Object > Arrange options.

Chapter Illustrations

The chapter illustrations also have a bit of my handiwork. The illustrations below were created by someone else, but I was asked to improve them a little and make them full-page.

Chapter illustrations already created but needing to be improved.

And so I did. Below are the ones I delivered to Smashing Magazine and which were implemented in the final version.

Note: As you can see, I’ve incorporated the dotted pattern and modified some of the icons a little bit, but I kept the overall illustration style.

For the first chapter, there was no image, so that one was based on the style already in place.

The six chapter illustrations created from ones already in place (with the exception of chapter 1).

I hope you’ve enjoyed my design process story and the quick process tutorial. Don’t forget to check out the pilot issue of Smashing Magazine Print (view sample PDF). It’s a must-have for any web designer! Enjoy!



Source: Smashing Magazine, My Design Process Of The Cover Design For Smashing Magazine Print Issue #1

A Pain-Free Workflow For Issue Reporting And Resolution

dreamt up by webguru in Uncategorized | Comments Off on A Pain-Free Workflow For Issue Reporting And Resolution

A Pain-Free Workflow For Issue Reporting And Resolution

A Pain-Free Workflow For Issue Reporting And Resolution

Suzanne Scacca



(This is a sponsored post.) Errors, bugs and other issues are bound to arise in web development. Even if they aren’t outright errors, clients often have feedback about how something was designed, where it was placed or how certain elements work. It’s just part of the gig.

It can be a very painful part of the gig, too.

Take this scenario, for instance:

Email #1 from client: “I can’t see the button anymore. Can you please put it back on the home page?”

Email #2 from you: “Which button are you referring to? Can you send me a screenshot?”

You try to call the client, but get their voicemail instead.

Email #3 from the client: “The button to book a demo.”

You look at the attached screenshot and see that the Book a Demo section is intact, but the button doesn’t show. You pull up the website on Chrome and Safari and see it in both browsers: a big blue button that says “Schedule Demo”. You pull it up on your iPhone and see it there, too.

Email #4 from you: “Can you tell me which device and browser you’re seeing the issue on?”

Email #5 from client: “My phone.”

You know how this chain of messages will go and it’s only going to lead to frustration on both ends. Not to mention the cost to your business every time you have to pause from work to try to interpret a bug report and then work through it.

Then, there’s the cost of bugs to your clients you have to think about. When something goes wrong after launch and your client is actively trying to send traffic to the website, a bug could hurt their sales.

When that happens, who do you think they’re going to come after?

A Pain-Free Workflow For Issue Reporting And Repairs

It doesn’t matter what the size of the bug or issue is. When it’s detected and reported, it needs to be dealt with. There are a number of reasons why.

For starters, it’s the only way you’re going to get your client to sign off on a project as complete. Plus, swift and immediate resolution of bugs leads to better relations with your client who sees how invested you are in creating an impressive (and error-free) website for their business. And, of course, the more efficiently you resolve errors, the quicker you can get back to finishing this job and moving on to others!

So, here’s what you need to do to more effectively and painlessly tackle these issues.

  1. Assign Someone To Triage
  2. Use An Issue Resolution Workflow
  3. Give Your Users A Bug Reporting Tool
  4. Give Your Triage Manager A Tracking Platform
  5. Work In A Local Testing Platform
  6. Always Close The Loop

1. Assign Someone To Triage

The first thing to do is decide who’s going to triage issues.

If you work on your own, then that responsibility is yours to own. If you work on a team, it should go to a project manager or dev lead that can manage reported issues just as effectively as they would manage the team’s workload.

This person will then be in charge of:

  • Monitoring for reported issues.
  • Adding the bugs to the queue.
  • Ushering them through the resolution workflow.
  • Resolving and closing up bug reports.
  • Analyzing trends and revising your processes to reduce the likelihood that the same bugs appear again.

Once you know who will manage the process, it’s time to design your workflow and build a series of tools around it.

2. Use An Issue Resolution Workflow

Your triage manager can’t do this alone. They’re going to need a process they can closely follow to take each issue from Point A (detection) to Point B (resolution).

To ensure you’ve covered every step, use a visualization tool like Lucidchart to lay out the steps or stages of your workflow.

Here’s an example of how your flow chart might look:

Lucidchart issue reporting workflow

An example of an issue reporting workflow built in Lucidchart. (Source: Lucidchart) (Large preview)

Let’s break it down:

You’ll start by identifying where the issue was detected and through which channel it was reported. This example doesn’t get too specific, but let’s say the new issue detected was the one mentioned before: the Book a Demo button is missing on the home page.

First steps issue detection

What happens when an issue is detected on a website. (Source: Lucidchart) (Large preview)

The next thing to do is to answer the question: “Who found it?” In most cases, this will be feedback submitted by your client from your bug-tracking software (more on that shortly).

Next, you’re going to get into the various stages your issues will go through:

Issue tracking tickets

An example of how to move tickets through an issue tracking system. (Source: Lucidchart) (Large preview)

This is the part of the process where the triage manager will determine how severe the issue of a missing Book a Demo button is (which is “Severe” since it will cost the client conversions). They’ll then pass it on to the developer to verify it.

Depending on how many developers or subject matter experts are available to resolve the issue, you might also want to break up this stage based on the type of bug (e.g. broken functionality vs. design updates).

Regardless, once the bug has been verified, along with the context it appears in (like if it only shows up on iPhone 7 or earlier), the ticket is moved to “In Progress”.

Finally, your flow chart should break out the subsequent steps for issues that can be resolved:

Issue resolution workflow sample

A sample workflow of how to resolve website issues and bugs. (Source: Lucidchart) (Large preview)

You can name these steps however you choose. In the example above, each step very specifically explains what needs to happen:

  • New Issue
  • In Progress
  • Test
  • Fix
  • Verify
  • Resolve
  • Close the Loop.

To simplify things, you could instead use a resolution flow like this:

  • New Issue
  • Todo
  • Doing
  • Done
  • Archive.

However you choose to set up your patch workflow, just make sure that the bug patch is tested and verified before you close up the ticket.

3. Give Your Users A Bug Reporting Tool

When it comes to choosing a bug reporting tool for your website, you want one that will make it easy for your team and clients to leave feedback and even easier for you to process it.

One such tool that does this well is called BugHerd.

Basically, BugHerd is a simple way for non-technical people to report issues to you visually and contextually. Since there’s no need to train users on how to get into the bug reporting tool or to use it, it’s one less thing you have to spend your time on in this process.

What’s more, BugHerd spares you the trouble of having to deal with the incessant back-and-forth that takes place when feedback is communicated verbally and out of context.

With BugHerd, though, users drop feedback onto the website just as easily as they’d leave a sticky note on your desk. What’s more, the feedback is pinned into place on the exact spot where the bug exists.

Let me show you how it works:

When you first add your client’s website to BugHerd (it’s the very first step), you’ll be asked to install the BugHerd browser extension. This is what allows BugHerd to pin the feedback bar to the website.

It looks like this:

BugHerd bug reporting tool

How the BugHerd sidebar appears to clients and team members with the extension installed. (Source: BugHerd) (Large preview)

This pinned feedback bar makes it incredibly easy for clients to leave feedback without actually altering the live website.

This is what the bug tracker pop-up looks like:

BugHerd error collection

BugHerd makes error collection from clients very easy. (Source: BugHerd) (Large preview)

As you can see, it’s a very simple form. And, really, all your clients need to do is select the element on the page that contains the bug, then enter the details. The rest can be populated by your triage manager.

As new feedback is added, comments are pinned to the page where they left it. For example:

BugHerd bug list

Review all reported bugs from the BugHerd sidebar. (Source: BugHerd) (Large preview)

You’ll also notice in the screenshot above that tasks that have been assigned a severity level are marked as such. They’re also listed from top to bottom by how critical they are.

On your side of things, you have a choice as to where you view your feedback. You can open the site and review the notes pinned to each page. Or you can go into the BugHerd app and review the comments from your Kanban board:

BugHerd bug dashboard

This is the BugHerd dashboard your developers and triage manager can use. (Source: BugHerd) (Large preview)

By default, all bugs enter the Backlog to start. It’s your triage manager’s job to populate each bug with missing details, assign to a developer and move it through the steps to resolution.

That said, BugHerd takes on a lot of the more tedious work of capturing bug reports for you. For example, when you click on any of the reported bugs in your kanban board, this “Task Details” sidebar will appear:

Bug details in BugHerd

A place to review all of the details about captured bugs. (Source: BugHerd) (Large preview)

This panel provides extra details about the issue, shows a screenshot of where it exists on the site, and also lets you know who left the comment.

What’s more, BugHerd captures “Additional Info”:

BugHerd additional info

Click on ‘Additional info’ to reveal details about the browser, OS and code. (Source: BugHerd) (Large preview)

This way, you don’t have to worry about the client not providing you with the full context of the issue. These details tell you what device and browser they were on, how big the screen was and what color depth they were viewing it at.

You also get a look at the code of the buggy element. If there’s something actually broken or improperly coded, you might be able to spot it from here.

All in all, BugHerd is a great tool to simplify how much everyone has to do from all sides and ensure each request is tackled in a timely manner.

4. Give Your Triage Manager A Tracking Platform

If you want to keep this workflow as simple as possible, you can use the BugHerd dashboard to track and manage your requests:

The BugHerd dashboard

An example of the BugHerd dashboard when it’s in use. (Source: BugHerd) (Large preview)

Your triage manager and dev team will probably want to use something to complement the bug reporting capabilities of BugHerd. But good luck asking your client to use a platform like Jira to help you manage bugs.

In that case, I’d recommend adding another tool to this workflow.

Luckily for you, BugHerd seamlessly integrates with issue tracking and helpdesk software like Jira, Zendesk and Basecamp, so you don’t have to worry about using multiple tools to manage different parts of the same process. Once the connection is made between your two platforms, any task created in BugHerd will automatically be copied to your issue resolution center.

Now, if there’s a tool your team is already using, but that BugHerd doesn’t directly integrate with, that’s okay. You can use Zapier to help you connect with even more platforms.

For example, this is how easy it is to instantly create a “zap” that copies new BugHerd tasks to your Trello cards. And it all takes place from within BugHerd!

BugHerd - Zapier integration

BugHerd helps users quickly integrate other apps like Zapier and Trello. (Source: BugHerd) (Large preview)

Once the connection is made, your triage manager can start working from the task management or issue tracking platform of their choosing. In this case, this is what happens when Zapier connects BugHerd and Trello:

New task in BugHerd

This is a new task that was just created in BugHerd. (Source: BugHerd) (Large preview)

This is a new task I just created in BugHerd. Within seconds, the card was placed into the exact Trello project and list that I configured the zap for:

BugHerd + Zapier + Trello

The BugHerd Zapier integration instantly copies new bug reports to Trello. (Source: Trello) (Large preview)

This will make your triage manager’s job much easier as they won’t be limited by the stages available in BugHerd while also still having all the same information readily at their fingertips.

5. Work In A Local Testing Platform

When bugs are reported, you don’t want to test and implement the assumed fixes on the live website. That’s too risky.

Instead, work on resolving issues from a local testing platform. This article has some great suggestions on local development tools for WordPress you can use for this.

These tools enable you to:

  • Quickly make a copy of your website.
  • Reproduce the bug with the same server conditions.
  • Test possible fixes until you find one that works.

Only then should you work on patching the bug on the website.

6. Always Close The Loop

Finally, it’s up to your triage manager to bring each issue to a formal close.

First, they should inform the client (or visitor) who originally reported the issue that it has been resolved. This kind of transparency and accountability will give your agency a more polished look while helping you build trust with clients who might be unnerved by discovering bugs in the first place.

Once things are closed client-side, the triage manager can then archive the bug report.

It shouldn’t end there though.

Like traditional project managers, a triage manager should regularly track trends as well as the overall severity of bugs found on their websites. The data might reveal that there’s a deeper issue at play. That way, your team can focus on resolving the underlying problem and stop spending so much time repairing the same kinds of bugs and issues.

Wrapping Up

Think about all of the ways in which issues and bugs may be reported: through a contact form, by email, over the phone, through chat or, worse, in a public forum like social media.

Now, think about all of the different people who might report these issues to you: your team, the client, a customer of your client, a person who randomly found it while looking at the website and so on.

There are just too many variables in this equation, which makes it easy to lose sight of open issues. Worse, when feedback comes through vague, subjective or stripped of any context, it becomes too difficult to resolve issues completely or in a timely fashion.

With the right system of reporting, tracking and organizing feedback in place, though, you can bring order to this chaos and more effectively wipe out bugs found on your website.

Smashing Editorial
(ms, ra, yk, il)

Source: Smashing Magazine, A Pain-Free Workflow For Issue Reporting And Resolution

Monthly Web Development Update 9/2019: Embracing Basic And Why Simple Is Hard

dreamt up by webguru in Uncategorized | Comments Off on Monthly Web Development Update 9/2019: Embracing Basic And Why Simple Is Hard

Monthly Web Development Update 9/2019: Embracing Basic And Why Simple Is Hard

Monthly Web Development Update 9/2019: Embracing Basic And Why Simple Is Hard

Anselm Hannemann



Editor’s note: Please note that this is the last Monthly Web Development Update in the series. You can still follow the Web Development Reading List on Anselm’s site at https://wdrl.info. Watch out for a new roundup post format next month here on Smashing Magazine. A big thank-you to Anselm for sharing his findings and his thoughts with us during the past four years.

Do we make our lives too complex, too busy, and too rich? More and more people working with digital technology realize over time that a simple craft and nature are very valuable. The constant hunt to do more and get more productive, with even the leisure activities that are meant to help us refuel turning into a competition, doesn’t seem to be a good idea, yet it is currently a trend in our modern world. After work, we feel we need to do two hours of yoga and master the most complex of poses, we need a hobby, we need to binge-watch series on Netflix, and a lot more. That’s why this week I want to encourage you to embrace a basic lifestyle.

“To live a life in which one purely subsists on the airy cream puffs of ideas seems enviably privileged: the ability to make a living merely off of one’s thoughts, rather than manual or skilled labor.”

Nadia Eghbal in “Basic”

What does basic stand for? Keep it real, don’t constantly do extra hours, don’t try to pack your workday with even more tasks or find more techniques to make it more efficient. Don’t try to hack your productivity, your sleep, let alone your meditation, yoga, or other wellness and sports activities. Do what you need to do and enjoy the silence and doing nothing when you’re finished. Living a basic life is a virtue, and it becomes more relevant again as we have more money to spend on unnecessary goods and more technology that intercepts our basic human thoughts on things.

News

General

  • Chris Coyier asks whether a website should work without JavaScript in 2019. It breaks down into a couple of thoughts that mainly conclude with progressive enhancement being more important than making a website work for users who have actively turned off JavaScript.

Privacy

UI/UX

  • In our modern world, it’s easy to junk things up. We’re quick to add more questions to research surveys, more buttons to a digital interface, more burdens to people. Simple is hard.

Web Performance

  • So many users these days use the Internet with a battery-driven device. The WebKit team shares how web content can affect power usage and how to improve the performance of your web application and save battery.

Visualization of high impact on the battery when scrolling a page with complex rendering and video playback.

Scrolling a page with complex rendering and video playback has a significant impact on battery life. But how can you make your pages more power efficient? (Image credit)

JavaScript

Tooling

  • There’s a new tool in town if you want to have a status page for your web service: The great people from Oh Dear now also provide status pages.
  • Bastian Allgeier shares his thoughts on simplicity on the web, where we started, and where we are now. Call it nostalgic or not, but the days when we simply uploaded a file via FTP and it was instantly live on the server were easy ones. Now, with all the CI/CD tooling around, we have gained many advantages in terms of security, version management, and testability. However, a simple solution looks different.

Accessibility

  • Adrian Roselli shares why we shouldn’t under-engineer text form fields and why the default CSS that comes with the browser usually isn’t enough. A pretty good summary of what’s possible, what’s necessary, and how to make forms better for everyone visiting our websites. It even includes high-contrast mode, dark mode, print styles, and internationalization.

Work & Life

Going Beyond…

—Anselm

Smashing Editorial
(cm)

Source: Smashing Magazine, Monthly Web Development Update 9/2019: Embracing Basic And Why Simple Is Hard

Collective #548

dreamt up by webguru in Uncategorized | Comments Off on Collective #548


C537_divi

Our Sponsor

The Ultimate WordPress Page Builder

You’ve never built a WordPress website like this before. Divi is more than just a WordPress theme, it’s a completely new website building platform that replaces the standard WordPress post editor with a vastly superior visual editor.

Try it


C548_simplicity

Simplicity (II)

Bastian Allgeier on the lessons learned while working on very old projects and how the post-build-process era brought dependency hell.

Read it






C548_caniemail

Can I email

A very useful site that offers support info on more than 50 HTML and CSS features tested across 25 email clients.

Check it out


C548_veoluz

VeoLuz

A generative art tool that lets you play with light in a way you never have before. By Jared Forsyth.

Check it out


C548_csscamera

CSS-Camera

Add depth using a 3D camera view to your web page with CSS3 3D transforms. By Mingyu Kim.

Check it out





C548_cables

Cables

In case you didn’t know about it: Cables is a tool for creating beautiful interactive content. With an easy to navigate interface and real time visuals, it allows for rapid prototyping and fast adjustments. Currently in public beta.

Check it out






C548_fullstackai

fullstack.ai

An end-to-end machine learning project showing key aspects of developing and deploying a real-life, ML-driven application.

Check it out





Collective #548 was written by Pedro Botelho and published on Codrops.


Source: Codrops, Collective #548

A Re-Introduction To Destructuring Assignment

dreamt up by webguru in Uncategorized | Comments Off on A Re-Introduction To Destructuring Assignment

A Re-Introduction To Destructuring Assignment

A Re-Introduction To Destructuring Assignment

Laurie Barth



If you write JavaScript you’re likely familiar with ES2015 and all the new language standards that were introduced. One such standard that has seen incredible popularity is destructuring assignment. The ability to “dive into” an array or object and reference something inside of it more directly. It usually goes something like this.

const response = {
   status: 200,
   data: {}
}

// instead of response.data we get...
const {data} = response //now data references the data object directly


const objectList = [ { key: 'value' }, { key: 'value' }, { key: 'value' } ]

// instead of objectList[0], objectList[1], etc we get...
const [obj, obj1, obj2] = objectList // now each object can be referenced directly

However, destructuring assignment is such a powerful piece of syntax that many developers, even those who have been using it since it was first released, forget some of the things it can do. In this post, we’ll go through five real-world examples for both object and array destructuring, sometimes both! And just for fun, I’ll include a wonky example I came across just the other day.

1. Nested Destructuring

Being able to access a top-level key inside an object, or the first element of an array is powerful, but it’s also somewhat limiting. It only removes one level of complexity and we still end up with a series of dots or [0] references to access what we’re really after.

As it turns out, destructuring can work beyond the top level. And there can be valid reasons for doing so. Take this example of an object response from an HTTP request. We want to go beyond the data object and access just the user. So long as we know the keys we’re looking for, that isn’t a problem.

const response = {
  status: 200,
  data: {
    user: {
      name: 'Rachel',
      title: 'Editor in Chief'
    },
    account: {},
    company: 'Smashing Magazine'
  }
}

const {data: {user}} = response // user is { name: 'Rachel', title: 'Editor in Chief' }

The same can be done with nested arrays. In this case, you don’t need to know the key since there is none. What you need to know is the position of what you’re looking for. You’ll need to provide a reference variable (or comma placeholder) for each element up to the one you’re looking for (we’ll get to that later). The variable can be named anything since it is not attempting to match a value inside the array.

const smashingContributors = [['rachel', ['writer', 'editor', 'reader']], ['laurie', ['writer', 'reader']]]

const [[rachel, roles]] = smashingContributors
// rachel is 'rachel'
// roles is [ 'writer', 'editor', 'reader' ]

Keep in mind that these features should be used judiciously, as with any tool. Recognize your use case and the audience of your code base. Consider readability and ease of change down the road. For example, if you’re looking to access a subarray only, perhaps a map would be a better fit.
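For instance, here’s a minimal sketch (my own example, not from the original article) of what that alternative might look like when all you want is each contributor’s roles subarray:

const smashingContributors = [['rachel', ['writer', 'editor', 'reader']], ['laurie', ['writer', 'reader']]]

// Destructure each entry inside the map callback and keep only the roles.
const roleLists = smashingContributors.map(([name, roles]) => roles)
// roleLists is [ [ 'writer', 'editor', 'reader' ], [ 'writer', 'reader' ] ]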

2. Object And Array Destructuring

Objects and arrays are common data structures. So common, in fact, that one often appears inside the other. Beyond nested destructuring, we can access nested properties even if they are in a different type of data structure than the external one we’re accessing.

Take this example of an array inside an object.

const organization = { 
    users: ['rachel', 'laurie', 'eric', 'suzanne'],
    name: 'Smashing Magazine',
    site: 'https://www.smashingmagazine.com/' 
}

const {users:[rachel]} = organization // rachel is 'rachel'

The opposite use case is also valid. An array of objects.

const users = [{name: 'rachel', title: 'editor'}, {name: 'laurie', title: 'contributor'}]

const [{name}] = users // name is 'rachel'

As it turns out, we have a bit of a problem in this example. We can only access the name of the first user; otherwise, we’ll attempt to use ‘name’ to reference two different strings, which is invalid. Our next destructuring scenario should sort this out.

3. Aliases

As we saw in the above example (when we have repeating keys inside different objects that we want to pull out), we can’t do so in the “typical” way. Variable names can’t repeat within the same scope (that’s the simplest way of explaining it; it’s obviously more complicated than that). Instead, we can give each destructured key an alias, a new and unique variable name.

const users = [{name: 'rachel', title: 'editor'}, {name: 'laurie', title: 'contributor'}]

const [{name: rachel}, {name: laurie}] = users // rachel is 'rachel' and laurie is 'laurie'

Aliasing is only applicable to objects. That’s because arrays can use any variable name the developer chooses, instead of having to match an existing object key.
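To make the contrast concrete, here’s a small sketch of my own showing that array destructuring simply assigns whatever names you pick, by position:

const users = [{name: 'rachel', title: 'editor'}, {name: 'laurie', title: 'contributor'}]

// No keys to match; the names are entirely up to you.
const [firstUser, secondUser] = users
// firstUser is { name: 'rachel', title: 'editor' }
// secondUser is { name: 'laurie', title: 'contributor' }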

4. Default Values

Destructuring often assumes that the value it’s referencing is there, but what if it isn’t? It’s never pleasant to litter code with undefined values. That’s when default values come in handy.

Let’s look at how they work for objects.

const user = {name: 'Luke', organization: 'Acme Publishing'}
const {name='Brian', role='publisher'} = user
// name is Luke
// role is publisher

If the referenced key already has a value, the default is ignored. If the key does not exist in the object, then the default is used.

We can do something similar for arrays.

const roleCounts = [2]
const [editors = 1, contributors = 100] = roleCounts
// editors is 2
// contributors is 100

As with the objects example, if the value exists then the default is ignored. Looking at the above example you may notice that we’re destructuring more elements than exist in the array. What about destructuring fewer elements?

5. Ignoring Values

One of the best parts of destructuring is that it allows you to access values that are part of a larger data structure. This includes isolating those values and ignoring the rest of the content, if you so choose.

We actually saw an example of this earlier, but let’s isolate the concept we’re talking about.

const user = {name: 'Luke', organization: 'Acme Publishing'}
const {name} = user
// name is Luke

In this example, we never destructure organization and that’s perfectly ok. It’s still available for reference inside the user object, like so.

user.organization

For arrays, there are actually two ways to “ignore” elements. In the objects example we’re specifically referencing internal values by using the associated key name. When arrays are destructured, the variable name is assigned by position. Let’s start with ignoring elements at the end of the array.

const roleCounts = [2, 100, 100000]
const [editors, contributors] = roleCounts
// editors is 2
// contributors is 100

We destructure the first and second elements in the array and the rest are irrelevant. But how about later elements? If it’s position based, don’t we have to destructure each element up until we hit the one we want?

As it turns out, we do not. Instead, we use commas to imply the existence of those elements, but without reference variables they’re ignored.

const roleCounts = [2, 100, 100000]
const [, contributors, readers] = roleCounts
// contributors is 100
// readers is 100000

And we can do both at the same time. Skipping elements wherever we want by using the comma placeholder. And again, as with the object example, the “ignored” elements are still available for reference within the roleCounts array.
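Here’s a small sketch of my own combining the two: a comma placeholder skips the first element, and the trailing elements are ignored simply by not destructuring them.

const roleCounts = [2, 100, 100000, 5]

// Skip the first element, grab the second, ignore the rest.
const [, contributors] = roleCounts
// contributors is 100; the other values are still available via roleCounts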

Wonky Example

The power and versatility of destructuring also means you can do some truly bizarre things. Whether they’ll come in handy or not is hard to say, but it’s worth knowing they’re an option!

One such example is that you can use destructuring to make shallow copies.

const obj = {key: 'value', arr: [1,2,3,4]}
const {arr, arr: copy} = obj
// arr and copy are both [1,2,3,4]

Another thing destructuring can be used for is dereferencing.

const obj = {node: {example: 'thing'}}
const {node, node: {example}} = obj
// node is { example: 'thing' }
// example is 'thing'

As always, readability is of the utmost importance and all of these examples should be used judiciously. But knowing all of your options helps you pick the best one.

Conclusion

JavaScript is full of complex objects and arrays. Whether it’s the response from an HTTP request, or static data sets, being able to access the embedded content efficiently is important. Using destructuring assignment is a great way to do that. It not only handles multiple levels of nesting, but it allows for focused access and provides defaults in the case of undefined references.
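As a quick recap, here’s a hypothetical sketch (the response shape is made up) that combines nesting, an alias and a default in a single statement:

const response = {
  status: 200,
  data: {
    user: { name: 'Rachel', title: 'Editor in Chief' }
  }
}

// Nested destructuring with an alias for `name` and a default for `role`.
const {
  status,
  data: {
    user: { name: userName, role = 'contributor' }
  }
} = response
// status is 200, userName is 'Rachel', role is 'contributor' (the default)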

Even if you’ve used destructuring for years, there are so many details hidden in the spec. I hope that this article acted as a reminder of the tools the language gives you. Next time you’re writing code, maybe one of them will come in handy!

Smashing Editorial
(dm, yk, il)

Source: Smashing Magazine, A Re-Introduction To Destructuring Assignment

Moving Your JavaScript Development To Bash On Windows

dreamt up by webguru in Uncategorized | Comments Off on Moving Your JavaScript Development To Bash On Windows

Moving Your JavaScript Development To Bash On Windows

Moving Your JavaScript Development To Bash On Windows

Burke Holland



I’m one of those people who can’t live without their Bash terminal. This sole fact has made it difficult for me to do frontend work on Windows. I work at Microsoft and I’m on a Mac. It wasn’t until the new Surface hardware line came out a few years ago that I realized: I gotta have one of those.

So I got one. A Surface Book 2 running Windows 10 to be exact. I’m drafting this article on it right now. And what of my sweet, sweet Bash prompt? Well, I brought it along with me, of course.

In this article, I’m going to take an in-depth look at how new technology in Windows 10 enables you to run a full Linux terminal on Windows. I’ll also show you my amazing terminal setup (which was named “best ever” by “me”) and how you too can set up your very own Windows/Linux development machine.

If you’ve been craving some of that Surface hardware but can’t live without a Linux terminal, you’ve come to the right place.

Note: At the time of this writing, a lot of the items in this article will require you to use or switch to “preview” or “insiders” builds of various items, including Windows. Most of these things will be in the main Windows build at some point in the future.

Windows Subsystem For Linux (WSL)

The Windows Subsystem for Linux, or, “WSL” is what enables you to run Linux on Windows. But what exactly is this mad science?

The WSL, in its current incarnation, is a translation layer that converts Linux system calls into Windows system calls. Linux runs on top of the WSL. That means that in order to get Linux on Windows, you need to do three things:

  1. Enable the WSL,
  2. Install Linux,
  3. Always include three items in a list.

As it turns out, that translation layer is a tad on the slow side — kind of like me trying to remember if I need splice or slice. This is especially true when the WSL is reading and writing to the file system. That’s kind of a big problem for web developers since any proper npm install will copy thousands of files to your machine. I mean, I don’t know about you, but I’m not going to left-pad my own strings.

Version 2 of the WSL is a different story. It is considerably faster than the current version because it leverages a virtualization core in Windows instead of using the translation layer. When I say it’s “considerably faster”, I mean way, way faster. Like as fast as me Googling “splice vs slice”.

For that reason, I’m going to show how to install the WSL 2. At the time of writing, that is going to require you to be on the “Insider” build of Windows.

First things first: follow this short guide to enable the WSL on Windows 10 and check your Windows version number.

Once you have it installed, hit the Windows key and type “windows insider”. Then choose “Windows Insider Program Settings”.

Windows Insider Program settings menu option

(Large preview)

You’ll have a couple of different options as to which “ring” you want to be on. A lot of people I know are on the fast ring. I’m a cautious guy, though. When I was a kid I would go down the slide at the playground on my stomach holding on to the sides. Which is why I stay on the slow ring. I’ve been on it for several months now, and I find it to be no more disruptive or unstable than regular Windows.

It’s a good option if you want the WSL 2, but you don’t want to die on the slide.

Windows Insider settings screen showing “Slow” ring

(Large preview)

Next, you need to enable the “Virtual Machine Platform” feature in Windows, which is required by the WSL version 2. To get to this screen, press the Windows key and type “windows features”. Then select “Turn Windows Features on or off”. Select “Virtual Machine Platform”. The “Windows Subsystem for Linux” option should already be enabled.

The “Windows Features” screen with “Virtual Machine Platform” and “Windows Subsystem for Linux” highlighted

(Large preview)

Now that the WSL is enabled, you can install Linux. You do this, ironically enough, directly from the Windows Store. Only in 2019 would I suggest that you “install Linux from the Windows store”.

There are several different distributions to choose from, but Ubuntu is going to be the most supported across all the tools we’ll configure later on — including VS Code. All of the instructions from here on out assume an Ubuntu install. If you install a different distro, all bets are off.

Search for “Ubuntu” from the Windows Store. There will be three to choose from: Ubuntu, Ubuntu 18.04, and Ubuntu 16.04. Ubuntu really likes that 04 minor version number, don’t they?

The “Ubuntu” item in the Windows Store

(Large preview)

The “Ubuntu” distro (the first one in this screenshot) is the “meta version”, or rather a placeholder that just points to the latest version. As of right now, that’s 18.04.

I went with the meta version because later on I’ll show you how to browse the Linux file system with Windows Explorer and it’s kinda messy to have “Ubuntu 18.04” as a drive name vs just “Ubuntu”.

This install is pretty quick depending on your internet connection. It’s only about 215 megabytes, but I am on a gigabit connection over here and how do you know if someone is on a gigabit connection? Don’t worry, they’ll tell you.

Once installed, you’ll now have an “Ubuntu” app in your start menu.

Ubuntu installed and showing up in the Windows Start menu

(Large preview)

If you click on that, you’ll get a Bash terminal!

The Ubuntu terminal running on Windows

(Large preview)

Take a moment to bask in the miracle of technology.

By default, you’ll be running in the WSL version 1. To upgrade to version 2, you’ll need to open a PowerShell terminal and run a command.

Hit the “Windows” key and type “Powershell”.

The “Powershell” item in the start menu

(Large preview)

From the PowerShell terminal, you can see which version of the WSL you have by executing wsl --list --verbose.

Doing a verbose list of all WSL instances running from within Powershell

(Large preview)

If you’re showing version 1, you’ll need to execute the --set-version command and specify the name of the instance (Ubuntu) and the version you want (2).

wsl --set-version Ubuntu 2

Setting the version of WSL to version 2 with Powershell

(Large preview)

This is going to take a bit, depending on how much meat your machine has. Mine took “some minutes” give or take. When it’s done, you’ll be on the latest and greatest version of the WSL.

This Is Your Brain On Linux… On Windows.

Linux is not Windows. WSL is not a bash prompt on top of a Windows operating system. It is a full operating system unto itself with its own folder structure and installed applications. If you install Node with the Windows installer, typing node in Linux is going to fail because Node is not installed in Linux. It’s installed on Windows.

The true magic of the WSL, though, lies in the way it seamlessly connects Windows and Linux so that they appear as one file system on your machine.

File And Folder Navigation

By default, the Ubuntu terminal drops you into your Linux home directory (or /home/your-user-name). You can move onto the Windows side by going to /mnt/c.

The Ubuntu terminal with the contents for the C drive listed out

(Large preview)

Notice that some permissions are denied here. I would have to right-click the Ubuntu icon and click “Run as Administrator” to get access to these files. This is how Windows does elevated permissions. There is no sudo on Windows.

Launching Applications

You can launch any Windows application from the Ubuntu terminal. For instance, I can open Windows Explorer from the Ubuntu terminal.

The Windows Explorer and the Ubuntu terminal

(Large preview)

This also works in reverse. You can execute any application installed on the Linux side. Here I am executing “fortune” installed in Linux from the Windows command line. (Because it ain’t a proper Linux install without random, meaningless fortunes.)

The Windows Command Line executing the Linux “fortune” program

(Large preview)
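If it helps to see both directions as commands, here’s a minimal sketch (assuming fortune is installed on the Linux side, as it is in the author’s setup):

# From the Ubuntu terminal: launch a Windows program by its .exe name.
explorer.exe .

# From a Windows Command Prompt or PowerShell: run a Linux command
# in the default distro through wsl.exe.
wsl fortune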

Two different operating systems. Two different file systems. Two different sets of installed applications. See how this could get confusing?

In order to keep everything straight, I recommend that you keep all your JavaScript development files and tools installed on the Linux side of things. That said, the ability to move between Windows and Linux and access files from both systems is the core magic of the WSL. Don’t forget it, cause it’s what makes this whole setup better than just a standard Linux box.

Setting Up Your Development Environment

From here on out, I’m going to give you a list of opinionated items for what I think makes a killer Linux on Windows setup. Just remember: my opinions are just that. Opinions. It just happens that just like all my opinions, they are 100% correct.

Getting A Better Terminal

Yes, you got a terminal when you installed Ubuntu. It’s actually the Windows Console connected to your Linux distro. It’s not a bad console. You can resize it, turn on copy/paste (in settings). But you can’t do things like tabs or open new windows. Just like a lot of people use replacement terminal programs on Mac (I use Hyper), there are other options for Windows as well. The Awesome WSL list on Github contains a pretty exhaustive list.

Those are all fine emulators, but there is a new option that is built by people who know Windows pretty well.

Microsoft has been working on a new application called “Windows Terminal”.

The Windows Terminal item  in the Windows Store

(Large preview)

Windows Terminal can be installed from the Windows Store and is currently in preview mode. I’ve been using it for quite a while now, and it has enough features and is stable enough for me to give it a full-throated endorsement.

The new Windows Terminal features a full tab interface, copy/paste, multiple profiles, transparent backgrounds, background images — even transparent background images. It’s a field day if you like to customize your terminal, and I came to win this sack race.

Here is my current terminal. We’ll take a walk through some of the important tweaks here.

The author’s current terminal: Dark blue background with a cartoon planet in the bottom right-hand corner. Green and white text.

(Large preview)

Windows Terminal is quite customizable. Clicking the down arrow at the top (next to the “+” sign) gives you access to “Settings”. This will open a JSON file.

Bind Copy/Paste

At the top of the file are all of the key bindings. The first thing that I did was map “copy” to Ctrl + C and paste to Ctrl + V. How else am I going to copy and paste in commands from Stack Overflow that I don’t understand?

{
  "command": "copy",
  "keys": ["ctrl+c"]
},
{
  "command": "paste",
  "keys": ["ctrl+v"]
},

The problem is that Ctrl + C is already mapped to SIGINT, the interrupt/kill command on Linux. A lot of terminals out there for Windows handle this by mapping Copy/Paste to Ctrl + Shift + C and Ctrl + Shift + V respectively. The problem is that copy/paste is Ctrl + C / Ctrl + V in every single other place in Windows. I just kept pressing Ctrl + C in the terminal over and over again trying to copy things. I could not stop doing it.

The Windows Terminal handles this differently. If you have text highlighted and you press Ctrl + C, it will copy the text. If there is a running process, it still sends the SIGINT command down and interrupts it. That means you can safely map Ctrl + C / Ctrl + V to Copy/Paste in the Windows Terminal and it won’t interfere with your ability to interrupt processes.

Whoever thought Copy/Paste could cause so much heartache?

Change The Default Profile

The default profile is what comes up when a new tab is opened. By default, that’s Powershell. You’ll want to scroll down and find the Linux profile. This is the one that opens wsl.exe -d Ubuntu. Copy its GUID and paste it into the defaultProfile setting.

I’ve moved these two settings so they are right next to each other to make it easier to see:

The default Terminal profile highlighted in the settings.json file

(Large preview)
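For reference, here’s a trimmed sketch of how those two settings relate. The GUID below is only a placeholder; copy the one from your own Ubuntu profile.

// The default profile setting
"defaultProfile": "{2c4de342-38b7-51cf-b940-2309a097f518}",

// ...and the matching profile entry it points to
{
  "guid": "{2c4de342-38b7-51cf-b940-2309a097f518}",
  "name": "Ubuntu",
  "commandline": "wsl.exe -d Ubuntu"
}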

Set The Background

I like my background to be a dark solid color with a flat-ish logo in the right-hand corner. I do this because I want the logo to be bright and visible, but not in the way of the text. This one I made myself, but there is a great collection of flat images to pick from at Simple Desktops.

The background is set with the backgroundImage property:

"backgroundImage": "c:/Users/YourUserName/Pictures/earth.png"

A blue square image with a cartoon planet in the bottom right-hand corner

(Large preview)

You’ll also notice a setting called “acrylic”. This is what enables you to adjust the opacity of the background. If you have a solid background color, this is pretty straightforward.

"background": "#336699",
"useAcrylic": true,
"acrylicOpacity": 0.5

The terminal with the background slightly transparent

(Large preview)

You can pull this off with a background image as well, by combining the acrylicOpacity setting with the backgroundImageOpacity:

"backgroundImage": "c:/Users/username/Pictures/earth-and-stars.png",
"useAcrylic": true,
"acrylicOpacity": 0.5

The terminal with both a transparent image and a transparent background

(Large preview)

For my theme, transparency makes everything look muted, so I keep the useAcrylic set to false.

Change The Font

The team building the Windows Terminal is also working on a new font called “Cascadia Code”. It’s not available as of the time of this writing, so you get the default Windows font instead.

The default font in the Windows Terminal is “Consolas”. This is the same font that the Windows command line uses. If you want that true Ubuntu feel, Chris Hoffman points out how you can install the official Ubuntu Mono font.

Here’s a before and after so you can see the difference:

"fontFace": "Ubuntu Mono"

A side-by-side comparison of Consolas and Ubuntu Mono fonts in the terminal

(Large preview)

They look pretty similar; the main difference being in the spacing of Ubuntu Mono which makes the terminal just a bit tighter and cleaner.

Color Schemes

The color schemes are all located at the bottom of the settings file. I copied the “Campbell” color scheme as a baseline. I try to match colors with their names, but I’m not afraid to go rogue either. I’ll map “#ffffff” to “blue” — I don’t even care.

The color scheme settings from the settings.json file

(Large preview)

If you like this particular scheme which I’ve named “Earth”, I’ve put together this gist so you don’t have to manually copy all of this mess out of a screenshot.

Note: The color previews come by virtue of the “Color Highlight” extension for VS Code.
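If it’s useful to see the shape of one of these schemes, here’s a minimal sketch. The hex values are placeholders, not the author’s actual “Earth” colors (those are in the gist), and a full scheme also defines the eight “bright” variants of the colors shown here.

{
  "name": "Earth",
  "foreground": "#FFFFFF",
  "background": "#336699",
  "black": "#0C0C0C",
  "red": "#C50F1F",
  "green": "#13A10E",
  "yellow": "#C19C00",
  "blue": "#0037DA",
  "purple": "#881798",
  "cyan": "#3A96DD",
  "white": "#CCCCCC"
}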

Change The Default Starting Directory

By default, the WSL profile drops you into your home directory on the Windows side. Based on the setup that I am recommending in this article, it would be preferable to be dropped into your Linux home folder instead. To do that, alter the startingDirectory setting in your “Ubuntu” profile:

"startingDirectory": "\wsl$Ubuntuhomeburkeholland"

Note the path there. You can use this path (minus the extra escape slashes) to access the WSL from the Windows command line.

A “dir” command run against the Linux home directory from the Windows Command Line

(Large preview)
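That screenshot boils down to something like this, run from a Windows Command Prompt (the username is the author’s; substitute your own):

REM List the Linux home directory through the \\wsl$ network path
REM (the same path as startingDirectory, minus the JSON escaping).
dir \\wsl$\Ubuntu\home\burkeholland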

Install Zsh/Oh-My-Zsh

If you’ve never used Zsh and Oh-My-Zsh before, you’re in for a real treat. Zsh (or “Z shell”) is a replacement shell for Linux. It expands on the basic capabilities of Bash, including implied directory switching (no need to type cd), better theming support, better prompts, and much more.

To install Zsh, grab it with the apt package manager, which comes out of the box with your Linux install:

sudo apt install zsh

Install oh-my-zsh using curl. Oh-my-zsh is a set of configurations for zsh that improve the shell experience even further with plugins, themes and a myriad of keyboard shortcuts.

sh -c "$(curl -fsSL https://raw.githubusercontent.com/robbyrussell/oh-my-zsh/master/tools/install.sh)"

Then it will ask you if you want to change your default shell to Zsh. You do, so answer in the affirmative and you are now up and running with Zsh and Oh-My-Zsh.

The terminal asking if you would like to change the default shell

(Large preview)

You’ll notice that the prompt is a lot cleaner now. You can change the look of that prompt by changing the theme in the ~/.zshrc file.

Open it with nano, which is kind of like VIM, but you can edit things and exit when you need to.

nano ~/.zshrc

Change the line that sets the theme. There is a URL above it with an entire list of themes. I think the “cloud” one is nice. And cute.

The “cloud” theme being set in the zshrc file

(Large preview)

To get changes to the .zshrc picked up, you’ll need to source it:

source ~/.zshrc

The “cloud” theme prompt

(Large preview)

Note: If you pick a theme like “agnoster” which requires glyphs, you’ll need a powerline infused version of Ubuntu Mono that has… glyphs. Otherwise, your terminal will just be full of weird characters like you mashed your face on the keyboard. Nerd Fonts offers one that seems to work pretty well.

Now you can do things like changing directories just by entering the directory name. No cd required. Wanna go back up a directory? Just type “..”. You don’t even have to type the whole directory name, just type the first few letters and hit tab. Zsh will give you a list of all of the files/directories that match your search and you can tab through them.

The terminal with one of many paths highlighted

(Large preview)

Installing Node

As a web developer, you’re probably going to want to install Node. I suppose you don’t have to install Node to do web development, but it sure feels like it in 2019!

Your first instinct might be to install node with apt, which you can do, but you would regret it for two reasons:

  1. The version of Node on apt is dolorously out of date;
  2. You should install Node with a version manager so that you don’t run into permissions issues.

The best way to solve both of these issues is to install nvm (Node Version Manager). Since you’ve installed zsh, you can just add the nvm plugin in your zshrc file and zsh takes care of the rest.

First, install the plugin by cloning in the zsh-nvm repo. (Don’t worry, Git comes standard on your Ubuntu install.)

git clone https://github.com/lukechilds/zsh-nvm ~/.oh-my-zsh/custom/plugins/zsh-nvm

Then add it as a plugin in the ~/.zshrc file.

`nano ~/.zshrc`

plugins=(zsh-nvm git)

The zshrc file with the zsh-vnm-plugin added

(Large preview)

Remember to source the zshrc file again with source ~/.zshrc and you’ll see nvm being installed.

The terminal showing the install progress of nvm

(Large preview)

Now you can install node with nvm. It makes it easy to install multiple side-by-side versions of node, and switch between them effortlessly. Also, no permissions errors when you do global npm installs!

nvm install --lts

I recommend this over the standard nvm install because the plugin gives you the ability to easily upgrade nvm. This is kind of a pain with the standard “curl” install. It’s one command with the plugin.

nvm upgrade
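And once a couple of versions are installed, switching between them really is effortless. A small sketch (the version numbers here are just examples):

# Install another version alongside the LTS release
nvm install 12

# List what's installed and switch between versions
nvm ls
nvm use 12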

Utilize Auto Suggestions

One of my very favorite plugins for zsh is zsh-autosuggestions. It remembers things you have typed in the terminal before, and then recognizes them when you start to type them again as well as “auto-suggests” the line you might need. This plugin has come in handy more times than I can remember — specifically when it comes to long CLI commands that I have used in the past, but can’t ever remember.

Clone the repo into the zsh extensions folder:

git clone https://github.com/zsh-users/zsh-autosuggestions ~/.oh-my-zsh/custom/plugins/zsh-autosuggestions

Then add it to your zsh plugins and source the zshrc file:

nano ~/.zshrc

# In the .zshrc file
plugins=(zsh-nvm zsh-autosuggestions git)

source ~/.zshrc

The plugin reads your zsh history, so start typing some command you’ve typed before and watch the magic. Try typing the first part of that long clone command above.

The terminal showing zsh autosuggestions auto completing a git clone command

(Large preview)

If you hit the → (right arrow) key, it will autocomplete the command. If you keep hitting →, it will cycle through any of the commands in your history that could be a match.

Important Keyboard Shortcuts

There are a few terminal shortcuts that I use all the time. I find this with all of my tools — including VS Code. Trying to learn all the shortcuts is a waste of time because you won’t use them enough to remember them.

Here are a few that I use regularly:

  • Ctrl + L: Clears the terminal and puts you back to the top. It’s the equivalent of typing “clear”.
  • Ctrl + U: Clears out the current line only.
  • Ctrl + A: Sends the cursor to the beginning of the command line.
  • Ctrl + E: Moves to the end of the line.
  • Ctrl + K: Deletes all the characters after the cursor.

That’s it! Everything else I’ve probably learned and then forgotten because it never gets any use.

Configuring Git(Hub/Lab/Whatevs)

Git comes on Ubuntu, so there is no install required. You can follow the instructions at your source control hoster of choice to get your ssh keys created and working.

Note that in the Github instructions, it tells you to use the “copy” utility to copy your ssh key. Ubuntu has the “xclip” command, but it’s not going to work here because there is no interop between Linux and Windows in terms of a clipboard.

Instead, you can just use the Windows Clipboard executable and call it directly from the terminal. You need to get the text first with cat, and then pipe that to the Windows clipboard.

cat ~/.ssh/id_rsa.pub | clip.exe 

The Github docs tell you to make sure that the ssh-agent is running. It’s not. You’ll see this when you try and add your key to the agent:

The terminal showing that the ssh agent is not running

(Large preview)

You can start the agent, but the next time you reboot Windows or the WSL is stopped, you’ll have to start it again. This is because there is no initialization system in the WSL. There is no systemd or another process that starts all of your services when the WSL starts. WSL is still in preview, and the team is working on a solution for this.

In the meantime, believe it or not, there’s a zsh plugin for this, too. It’s called ssh-agent, and it comes installed with oh-my-zsh, so all you need to do is reference it in the .zshrc file.

plugins=(zsh-nvm zsh-autosuggestions ssh-agent git)

This will start the ssh-agent automatically if it’s not running the first time that you fire up the WSL. The downside is that it’s going to ask you for your passphrase every time WSL is started fresh. That means essentially anytime you reboot your computer.

The terminal prompting for the passphrase for the rsa key

(Large preview)

VS Code And The WSL

The WSL has no GUI, so you can’t install a visual tool like VS Code. That needs to be installed on the Windows side. This presents a problem because you have a program running on the Windows side accessing files on the Linux side, and this can result in all manner of quirks and “permission denied” issues. As a general rule of thumb, Microsoft recommends that you not alter files on the WSL side with Windows programs.

To resolve this, there is an extension for VS Code called “Remote WSL”. This extension is made by Microsoft, and allows you to develop within the WSL, but from inside of VS Code.

Once the extension is installed, you can attach VS Code directly to the Ubuntu side by opening the Command Palette (Ctrl + Shift + P) and select “Remote-WSL: New Window”.

VS Code with the “Remote WSL: New Window” command highlighted in the Command Palette

(Large preview)

This opens a new instance of VS Code that allows you to work as if you were fully on the Linux side of things. Doing “File/Open” browses the Ubuntu file system instead of the Windows one.

The VS Code “Open File” view

(Large preview)

The integrated terminal in VS Code opens your beautifully customized zsh setup. Everything “just works” like it should when you have the Remote WSL extension installed.

If you open code from your terminal with code ., VS Code will automatically detect that it was opened from the WSL, and will auto-attach the Remote WSL extension.

VS Code Extensions With Remote WSL

The Remote WSL extension for VS Code works by setting up a little server on the Linux side, and then connecting to that from VS Code on the Windows side. That being the case, the extensions that you have installed in VS Code won’t automatically show up when you open a project from the WSL.

For instance, I have a Vue project open in VS Code. Even though I have all of the right Vue extensions installed for syntax highlighting, formatting and the like, VS Code acts like it’s never seen a .vue file before.

A .vue file open in VS Code with no syntax highlighting

(Large preview)

All of the extensions that you have installed can be enabled in the WSL. Just find the extension that you want in the WSL, and click the “Install in WSL” button.

The Vetur VS Code extension landing page in VS Code

(Large preview)

All of the extensions installed in the WSL will show up in their own section in the Extensions Explorer view. If you have a lot of extensions, it could be slightly annoying to install each one individually. If you want to just install every extension you’ve got in the WSL, click the little cloud-download icon at the top of the ‘Local – Installed’ section.

The Extensions view in VS Code with the install all extensions in WSL icon highlighted

(Large preview)

How To Setup Your Dev Directories

This is already an opinionated article, so here’s one you didn’t ask for on how I think you should structure your projects on your file system.

I keep all my projects on the Linux side. I don’t put my projects in “My Documents” and then try and work with them from the WSL. My brain can’t handle that.

I create a folder called /dev that I put in the root of my /home folder in Linux. Inside that folder, I create another one named after my Github account: /burkeholland. That folder is where all of my projects go — even the ones that aren’t pushed to Github.

If I clone a repo from a different Github account (e.g. “microsoft”), I’ll create a new folder in “dev” called /microsoft. I then clone the repo into a folder inside of that.

Basically, I’m mimicking the same structure as source control on my local machine. I find it far easier to reason about where projects are and what repos they are attached to just by virtue of their location. It’s simple, but it is highly effective at helping me keep everything organized. And I need all the help I can get.

The author’s opinionated folder structure listed in the terminal

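A minimal sketch of that layout from the terminal, using my GitHub account name and a couple of example repos (the first repo name is hypothetical):

$ mkdir -p ~/dev/burkeholland
$ cd ~/dev/burkeholland
$ git clone https://github.com/burkeholland/my-project.git   # one of my own repos (hypothetical name)
$ mkdir -p ~/dev/microsoft
$ cd ~/dev/microsoft
$ git clone https://github.com/microsoft/vscode.git          # a repo cloned from another account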

Browsing Files From Windows Explorer

There are times when you need to get at a file in Linux from the Windows side. The beautiful thing about the WSL is that you can still do that.

One way is to access the WSL just like a mapped drive. Access it with \\wsl$ directly from the Explorer address bar:

\\wsl$

Windows Explorer showing the Ubuntu installation as a mounted directory

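As a concrete sketch, assuming an Ubuntu distro and the home folder and dev directory described earlier (yours will likely differ), a path like this in the address bar drops you straight into your projects:

\\wsl$\Ubuntu\home\burkeholland\dev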

You might do this for a number of different reasons. For instance, just today I needed a Chrome extension that isn’t in the web store. So I cloned the repo in the WSL, then navigated to it from Windows and loaded it into Edge as an “Unpacked Extension”.

One thing that I do with some frequency in Linux is to open the directory that contains a file directly from the terminal. You can do this in the WSL, too, by directly calling explorer.exe. For instance, this command opens the current directory in Windows Explorer.

$ explorer.exe .
A GIF demonstrating Windows Explorer opening on the current directory from the terminal

This command is a bit cumbersome though. On macOS, it’s just open .. We can make that same magic by creating an alias in the ~/.zshrc:

alias open="explorer.exe"
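After reloading your config, the shorter command works from anywhere in the WSL. A quick sketch:

$ source ~/.zshrc   # pick up the new alias
$ open .            # opens the current WSL directory in Windows Explorer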

Docker

When I said all tooling should be on the Linux side, I meant that. That includes Docker.

This is where the rubber really starts to meet the road. What we need here is Docker, running inside of Linux running inside of Windows. It’s a bit of a Russian Nesting Doll when you write it down in a blog post. In reality, it’s pretty straightforward.

You’ll need the correct version of Docker for Windows. As of the time of this writing, that’s the WSL 2 Tech Preview.

When you run the installer, it will ask you if you want to use Windows containers instead of Linux containers. You definitely do. Otherwise, you won’t get the option to run Docker in the WSL.

The Docker Installation screen with “Use Windows Containers” option selected


You can now enable Docker in the WSL by clicking on the item in the system tray and selecting “WSL 2 Tech Preview”:

The WSL2 Tech Preview Option in the Docker Daemon context menu


After you start the service, you can use Docker within the WSL just as you would expect to be able to. Running Docker in the WSL provides a pretty big performance boost, as well as much faster cold start times for containers.
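A quick sanity check you might run from a WSL terminal once the service is up (these are just standard Docker commands, nothing specific to this setup):

$ docker version                # the “Server” section confirms the daemon is reachable from the WSL
$ docker run --rm hello-world   # pulls and runs the tiny hello-world image as a smoke test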

Might I also recommend that you install the Docker extension for VS Code? It puts a visual interface on your Docker setup and generally just makes it easier to work with Docker because you don’t have to remember all those command-line flags and options.

Get More Bash On Windows

At this point, you should have a good idea of how to put Bash on Windows and how it works once you get it there. You can customize your terminal endlessly, and there are all sorts of rad programs that you can add in to do things like automatically set PATH variables, create aliases, get an ASCII cow in your terminal, and much more.
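For instance, a couple of throwaway lines you might drop into ~/.zshrc, just to show the shape of it (both are hypothetical examples, not anything this setup requires):

export PATH="$HOME/.local/bin:$PATH"   # prepend a personal bin folder to PATH
alias gs="git status"                  # a shorter git status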

Running Bash on Windows opened up an entirely new universe for me. I’m able to combine Windows, which I love for the productivity side, and Linux, which I depend on as a developer. Best of all, I can build apps for both platforms now with one machine.

Further Reading

You can read more about Bash on Windows over here:

Special thanks to Brian Ketelsen, Matt Hernandez, Rich Turner, and Craig Loewen for their patience, help, and guidance with this article.

Smashing Editorial
(rb, dm, il)

Source: Smashing Magazine, Moving Your JavaScript Development To Bash On Windows