Collective #408

Inspirational Website of the Week: Charles Simon
Smooth transitions and a unique design made us pick “Charles Simon” as inspirational website this week. Get inspired

Our Sponsor: Everything IT requires and Developers love
HelloSign API’s robust SDK, amazing support, detailed documentation, and super clean… Read more

Collective #408 was written by Pedro Botelho and published on Codrops.


Source: Codrops, Collective #408

How To Create An Audio/Video Recording App With React Native: An In-Depth Tutorial


Oleh Mryhlod



React Native is a young technology that is already gaining popularity among developers. It is a great option for smooth, fast, and efficient mobile app development. High performance in mobile environments, code reuse, and a strong community are just some of the benefits React Native provides.

In this guide, I will share some insights about the high-level capabilities of React Native and the products you can develop with it in a short period of time.

We will delve into the step-by-step process of creating a video/audio recording app with React Native and Expo. Expo is an open-source toolchain built around React Native for developing iOS and Android projects with React and JavaScript. It provides a bunch of native APIs maintained by native developers and the open-source community.

After reading this article, you should have all the necessary knowledge to create video/audio recording functionality with React Native.

Let’s get right to it.

Brief Description Of The Application

The application you will learn to develop is called a multimedia notebook. I have implemented part of this functionality in an online job board application for the film industry. The main goal of this mobile app is to connect people who work in the film industry with employers. They can create a profile, add a video or audio introduction, and apply for jobs.

The application consists of three main screens that you can switch between with the help of a tab navigator:

  • the audio recording screen,
  • the video recording screen,
  • a screen with a list of all recorded media and functionality to play back or delete them.

Check out how this app works by opening this link with Expo.

First, download Expo to your mobile phone. There are two options for opening the project:

  1. Open the link in the browser, scan the QR code with your mobile phone, and wait for the project to load.
  2. Open the link with your mobile phone and click on “Open project using Expo”.

You can also open the app in the browser. Click on “Open project in the browser”. If you have a paid account on Appetize.io, visit it and enter the code in the field to open the project. If you don’t have an account, click on “Open project” and wait in an account-level queue to open the project.

However, I recommend that you download the Expo app and open this project on your mobile phone to check out all of the features of the video and audio recording app.

You can find the full code for the media recording app in the repository on GitHub.

Dependencies Used For App Development

As mentioned, the media recording app is developed with React Native and Expo.

You can see the full list of dependencies in the repository’s package.json file.

These are the main libraries used:

  • React-navigation, for navigating the application,
  • Redux, for saving the application’s state,
  • React-redux, which are React bindings for Redux,
  • Recompose, for writing the components’ logic,
  • Reselect, for extracting the state fragments from Redux.

Let’s look at the project’s structure:

  • src/index.js: root app component imported in the app.js file;
  • src/components: reusable components;
  • src/constants: global constants;
  • src/styles: global styles, colors, font sizes and dimensions;
  • src/utils: useful utilities and recompose enhancers;
  • src/screens: screens components;
  • src/store: Redux store;
  • src/navigation: application’s navigator;
  • src/modules: Redux modules divided by entities as modules/audio, modules/video, modules/navigation.

Let’s proceed to the practical part.

Create Audio Recording Functionality With React Native

First, it’s important to check the documentation for the Expo Audio API related to audio recording and playback. You can see all of the code in the repository. I recommend opening the code as you read this article to better understand the process.

When launching the application for the first time, you’ll need the user’s permission for audio recording, which entails access to the microphone. Let’s use Expo.AppLoading and ask permission for recording by using Expo.Permissions (see the src/index.js) during startAsync.

await Permissions.askAsync(Permissions.AUDIO_RECORDING);
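For context, here is a minimal sketch of how that call might be wired into Expo.AppLoading inside startAsync. The root component layout and the ./src import path are assumptions; the actual src/index.js in the repository may be organized differently.

import React from 'react';
import { AppLoading, Permissions } from 'expo';
import App from './src'; // assumed path to the root app component

export default class Root extends React.Component {
  state = { isReady: false };

  // Ask for every permission the app needs before rendering the UI.
  askPermissionsAsync = async () => {
    await Permissions.askAsync(Permissions.AUDIO_RECORDING);
    await Permissions.askAsync(Permissions.CAMERA);
  };

  render() {
    if (!this.state.isReady) {
      return (
        <AppLoading
          startAsync={this.askPermissionsAsync}
          onFinish={() => this.setState({ isReady: true })}
          onError={console.warn}
        />
      );
    }
    return <App />;
  }
}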

Audio recordings are displayed on a separate screen whose UI changes depending on the state.

First, you can see the button “Start recording”. After it is clicked, the audio recording begins, and you will find the current audio duration on the screen. After stopping the recording, you will have to type the recording’s name and save the audio to the Redux store.

My audio recording UI looks like this:

Large preview

I can save the audio in the Redux store in the following format:

audioItemsIds: ['id1', 'id2'],
audioItems: {
  'id1': {
    id: string,
    title: string,
    recordDate: date string,
    duration: number,
    audioUrl: string,
  }
},
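The repository’s modules/audio takes care of storing these items. As a rough sketch (the action type and function names here are assumptions, not the repository’s actual code), a reducer that keeps items in this shape could look like this:

// Hypothetical sketch of a modules/audio reducer; the module in the repository is more elaborate.
const ADD_AUDIO = 'audio/ADD_AUDIO'; // assumed action type

export const addAudio = audioItem => ({ type: ADD_AUDIO, payload: audioItem });

const initialState = { audioItemsIds: [], audioItems: {} };

export default function audioReducer(state = initialState, action) {
  switch (action.type) {
    case ADD_AUDIO:
      return {
        audioItemsIds: [...state.audioItemsIds, action.payload.id],
        audioItems: { ...state.audioItems, [action.payload.id]: action.payload },
      };
    default:
      return state;
  }
}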

Let’s write the audio logic by using Recompose in the screen’s container src/screens/RecordAudioScreenContainer.

Before you start recording, customize the audio mode with the help of Expo.Audio.setAudioModeAsync(mode), where mode is the dictionary with the following key-value pairs:

  • playsInSilentModeIOS: A boolean selecting whether your experience’s audio should play in silent mode on iOS. This value defaults to false.
  • allowsRecordingIOS: A boolean selecting whether recording is enabled on iOS. This value defaults to false. Note: When this flag is set to true, playback may be routed to the phone receiver, instead of to the speaker.
  • interruptionModeIOS: An enum selecting how your experience’s audio should interact with the audio from other apps on iOS.
  • shouldDuckAndroid: A boolean selecting whether your experience’s audio should automatically be lowered in volume (“duck”) if audio from another app interrupts your experience. This value defaults to true. If false, audio from other apps will pause your audio.
  • interruptionModeAndroid: An enum selecting how your experience’s audio should interact with the audio from other apps on Android.

Note: You can learn more about the customization of AudioMode in the documentation.

I have used the following values in this app:

interruptionModeIOS: Audio.INTERRUPTION_MODE_IOS_DO_NOT_MIX (our recording interrupts audio from other apps on iOS),

playsInSilentModeIOS: true,

shouldDuckAndroid: true,

interruptionModeAndroid: Audio.INTERRUPTION_MODE_ANDROID_DO_NOT_MIX (our recording interrupts audio from other apps on Android).

allowsRecordingIOS will change to true before the audio recording and to false after its completion.

To implement this, let’s write the handler setAudioMode with Recompose.

withHandlers({
 setAudioMode: () => async ({ allowsRecordingIOS }) => {
   try {
     await Audio.setAudioModeAsync({
       allowsRecordingIOS,
       interruptionModeIOS: Audio.INTERRUPTION_MODE_IOS_DO_NOT_MIX,
       playsInSilentModeIOS: true,
       shouldDuckAndroid: true,
       interruptionModeAndroid: Audio.INTERRUPTION_MODE_ANDROID_DO_NOT_MIX,
     });
   } catch (error) {
     console.log(error) // eslint-disable-line
   }
 },
}),

To record the audio, you’ll need to create an instance of the Expo.Audio.Recording class.

const recording = new Audio.Recording();

After creating the recording instance, you will be able to receive the status of the Recording with the help of recordingInstance.getStatusAsync().

The status of the recording is a dictionary with the following key-value pairs:

  • canRecord: a boolean.
  • isRecording: a boolean describing whether the recording is currently recording.
  • isDoneRecording: a boolean.
  • durationMillis: current duration of the recorded audio.
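For example, you could read the status once like this:

// One-off status check; the regular UI updates below use a callback instead.
const status = await recordingInstance.getStatusAsync();
console.log(status.isRecording, status.durationMillis);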

You can also set a function to be called at regular intervals with recordingInstance.setOnRecordingStatusUpdate(onRecordingStatusUpdate).

To update the UI, you will need to call setOnRecordingStatusUpdate and set your own callback.

Let’s add some props and a recording callback to the container.

withStateHandlers({
    recording: null,
    isRecording: false,
    durationMillis: 0,
    isDoneRecording: false,
    fileUrl: null,
    audioName: '',
  }, {
    setState: () => obj => obj,
    setAudioName: () => audioName => ({ audioName }),
    recordingCallback: () => ({ durationMillis, isRecording, isDoneRecording }) =>
      ({ durationMillis, isRecording, isDoneRecording }),
  }),

The callback setting for setOnRecordingStatusUpdate is:

recording.setOnRecordingStatusUpdate(props.recordingCallback);

onRecordingStatusUpdate is called every 500 milliseconds by default. To keep the UI updating smoothly, set the interval to 200 milliseconds with the help of setProgressUpdateInterval:

recording.setProgressUpdateInterval(200);

After creating an instance of this class, call prepareToRecordAsync to record the audio.

recordingInstance.prepareToRecordAsync(options) loads the recorder into memory and prepares it for recording. It must be called before calling startAsync(), and it can only be called if the recording instance has never been prepared before.

The parameters of this method include such options for the recording as sample rate, bitrate, channels, format, encoder and extension. You can find a list of all recording options in this document.

In this case, let’s use Audio.RECORDING_OPTIONS_PRESET_HIGH_QUALITY.

After the recording has been prepared, you can start recording by calling the method recordingInstance.startAsync().

Before creating a new recording instance, check whether it has been created before. The handler for beginning the recording looks like this:

onStartRecording: props => async () => {
      try {
        if (props.recording) {
          props.recording.setOnRecordingStatusUpdate(null);
          props.setState({ recording: null });
        }

        await props.setAudioMode({ allowsRecordingIOS: true });

        const recording = new Audio.Recording();
        recording.setOnRecordingStatusUpdate(props.recordingCallback);
        recording.setProgressUpdateInterval(200);

        props.setState({ fileUrl: null });

        await recording.prepareToRecordAsync(Audio.RECORDING_OPTIONS_PRESET_HIGH_QUALITY);
        await recording.startAsync();

        props.setState({ recording });
      } catch (error) {
        console.log(error) // eslint-disable-line
      }
    },

Now you need to write a handler for the audio recording completion. After clicking the stop button, you have to stop the recording, disable it on iOS, receive and save the local URL of the recording, and set OnRecordingStatusUpdate and the recording instance to null:

onEndRecording: props => async () => {
      try {
        await props.recording.stopAndUnloadAsync();
        await props.setAudioMode({ allowsRecordingIOS: false });
      } catch (error) {
        console.log(error); // eslint-disable-line
      }

      if (props.recording) {
        const fileUrl = props.recording.getURI();
        props.recording.setOnRecordingStatusUpdate(null);
        props.setState({ recording: null, fileUrl });
      }
    },

After this, type the audio name, click the “continue” button, and the audio note will be saved in the Redux store.

onSubmit: props => () => {
      if (props.audioName && props.fileUrl) {
        const audioItem = {
          id: uuid(),
          recordDate: moment().format(),
          title: props.audioName,
          audioUrl: props.fileUrl,
          duration: props.durationMillis,
        };

        props.addAudio(audioItem);
        props.setState({
          audioName: '',
          isDoneRecording: false,
        });

        props.navigation.navigate(screens.LibraryTab);
      }
    },

Audio Playback With React Native

You can play the audio on the screen with the saved audio notes. To start the audio playback, click one of the items on the list. Below, you can see the audio player that allows you to track the current position of playback, to set the playback starting point and to toggle the playing audio.

Here’s what my audio playback UI looks like:


The Expo.Audio.Sound objects and Expo.Video components share a unified imperative API for media playback.

Let’s write the logic of the audio playback by using Recompose in the screen container src/screens/LibraryScreen/LibraryScreenContainer, as the audio player is available only on this screen.

If you want to display the player at any point of the application, I recommend writing the logic of the player and audio playback in Redux operations using redux-thunk.
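That approach is not used in this app, but as a rough illustration, such an operation could look something like the following sketch (the action type is an assumption):

// Hypothetical sketch of a redux-thunk operation; this is not how the app in the repository is implemented.
import { Audio } from 'expo';

export const playAudio = audioItem => async (dispatch) => {
  const { sound } = await Audio.Sound.create(
    { uri: audioItem.audioUrl },
    { shouldPlay: true },
  );
  dispatch({ type: 'player/SET_PLAYBACK_INSTANCE', payload: sound }); // assumed action type
};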

Let’s customize the audio mode in the same way we did for the audio recording. First, set allowsRecordingIOS to false.

lifecycle({
    async componentDidMount() {
      await Audio.setAudioModeAsync({
        allowsRecordingIOS: false,
        interruptionModeIOS: Audio.INTERRUPTION_MODE_IOS_DO_NOT_MIX,
        playsInSilentModeIOS: true,
        shouldDuckAndroid: true,
        interruptionModeAndroid: Audio.INTERRUPTION_MODE_ANDROID_DO_NOT_MIX,
      });
    },
  }),

We have created the recording instance for audio recording. As for audio playback, we need to create the sound instance. We can do it in two different ways:

  1. const playbackObject = new Expo.Audio.Sound();
  2. Expo.Audio.Sound.create(source, initialStatus = {}, onPlaybackStatusUpdate = null, downloadFirst = true)

If you use the first method, you will need to call playbackObject.loadAsync(), which loads the media from source into memory and prepares it for playing, after creation of the instance.

The second method is a static convenience method to construct and load a sound. It creates and loads a sound from source with the optional initialStatus, onPlaybackStatusUpdate and downloadFirst parameters.

The source parameter is the source of the sound. It supports the following forms:

  • a dictionary of the form { uri: 'http://path/to/file' } with a network URL pointing to an audio file on the web;
  • require('path/to/file') for an audio file asset in the source code directory;
  • an Expo.Asset object for an audio file asset.
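To make the first approach concrete, here is a minimal usage sketch (the URI is just an example value):

const playbackObject = new Expo.Audio.Sound();
// Load the media into memory, then start playback.
await playbackObject.loadAsync({ uri: 'http://path/to/file.mp3' });
await playbackObject.playAsync();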

The initialStatus parameter is the initial playback status. PlaybackStatus is the structure returned from all playback API calls describing the state of the playbackObject at that point of time. It is a dictionary with the key-value pairs. You can check all of the keys of the PlaybackStatus in the documentation.

onPlaybackStatusUpdate is a function taking a single parameter, PlaybackStatus. It is called at regular intervals while the media is in the loaded state. The interval is 500 milliseconds by default. In my application, I set it to a 50-millisecond interval for proper UI updates.

Before creating the sound instance, you will need to implement the onPlaybackStatusUpdate callback. First, add some props to the screen container:

withClassVariableHandlers({
    playbackInstance: null,
    isSeeking: false,
    shouldPlayAtEndOfSeek: false,
    playingAudio: null,
  }, 'setClassVariable'),
  withStateHandlers({
    position: null,
    duration: null,
    shouldPlay: false,
    isLoading: true,
    isPlaying: false,
    isBuffering: false,
    showPlayer: false,
  }, {
    setState: () => obj => obj,
  }),

Now, implement onPlaybackStatusUpdate. You will need to make several validations based on PlaybackStatus for a proper UI display:

withHandlers({
    soundCallback: props => (status) => {
      if (status.didJustFinish) {
        props.playbackInstance().stopAsync();
      } else if (status.isLoaded) {
        const position = props.isSeeking()
          ? props.position
          : status.positionMillis;
        const isPlaying = (props.isSeeking() || status.isBuffering)
          ? props.isPlaying
          : status.isPlaying;
        props.setState({
          position,
          duration: status.durationMillis,
          shouldPlay: status.shouldPlay,
          isPlaying,
          isBuffering: status.isBuffering,
        });
      }
    },
  }),

After this, you have to implement a handler for the audio playback. If a sound instance is already created, you need to unload the media from memory by calling playbackInstance.unloadAsync() and clear OnPlaybackStatusUpdate:

loadPlaybackInstance: props => async (shouldPlay) => {
      props.setState({ isLoading: true });

      if (props.playbackInstance() !== null) {
        await props.playbackInstance().unloadAsync();
        props.playbackInstance().setOnPlaybackStatusUpdate(null);
        props.setClassVariable({ playbackInstance: null });
      }
      const { sound } = await Audio.Sound.create(
        { uri: props.playingAudio().audioUrl },
        { shouldPlay, position: 0, duration: 1, progressUpdateIntervalMillis: 50 },
        props.soundCallback,
      );

      props.setClassVariable({ playbackInstance: sound });

      props.setState({ isLoading: false });
    },

Call the handler loadPlaybackInstance(true) by clicking the item in the list. It will automatically load and play the audio.

Let’s add the pause and play functionality (toggle playing) to the audio player. If audio is already playing, you can pause it with the help of playbackInstance.pauseAsync(). If audio is paused, you can resume playback from the paused point with the help of the playbackInstance.playAsync() method:

onTogglePlaying: props => () => {
      if (props.playbackInstance() !== null) {
        if (props.isPlaying) {
          props.playbackInstance().pauseAsync();
        } else {
          props.playbackInstance().playAsync();
        }
      }
    },

When you click on the item that is currently playing, it should stop. If you want to stop the audio playback and reset it to the 0 playing position, you can use the method playbackInstance.stopAsync():

onStop: props => () => {
      if (props.playbackInstance() !== null) {
        props.playbackInstance().stopAsync();

        props.setShowPlayer(false);
        props.setClassVariable({ playingAudio: null });
      }
    },

The audio player also allows you to rewind the audio with the help of the slider. When you start sliding, the audio playback should be paused with playbackInstance.pauseAsync().

After the sliding is complete, you can set the audio playing position with the help of playbackInstance.setPositionAsync(value), or play back the audio from the set position with playbackInstance.playFromPositionAsync(value):

onCompleteSliding: props => async (value) => {
      if (props.playbackInstance() !== null) {
        if (props.shouldPlayAtEndOfSeek) {
          await props.playbackInstance().playFromPositionAsync(value);
        } else {
          await props.playbackInstance().setPositionAsync(value);
        }
        props.setClassVariable({ isSeeking: false });
      }
    },
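The handler for the start of a seek isn’t shown above. A minimal sketch of what it might look like, reusing the props introduced earlier (the handler name is an assumption; the repository may implement it differently):

onStartSliding: props => () => {
  if (props.playbackInstance() !== null) {
    // Remember whether playback should resume after the seek, flag that we're seeking, and pause.
    props.setClassVariable({
      isSeeking: true,
      shouldPlayAtEndOfSeek: props.shouldPlay,
    });
    props.playbackInstance().pauseAsync();
  }
},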

After this, you can pass the props to the components MediaList and AudioPlayer (see the file src/screens/LibraryScreen/LibraryScreenView).

Video Recording Functionality With React Native

Let’s proceed to video recording.

We’ll use Expo.Camera for this purpose. Expo.Camera is a React component that renders a preview of the device’s front or back camera. Expo.Camera can also take photos and record videos that are saved to the app’s cache.

To record video, you need permission for access to the camera and microphone. Let’s add the request for camera access as we did with the audio recording (in the file src/index.js):

await Permissions.askAsync(Permissions.CAMERA);

Video recording is available on the “Video Recording” screen. After switching to this screen, the camera will turn on.

You can change the camera type (front or back) and start video recording. During recording, you can see its general duration and can cancel or stop it. When recording is finished, you will have to type the name of the video, after which it will be saved in the Redux store.

Here is what my video recording UI looks like:


Let’s write the video recording logic by using Recompose in the screen container src/screens/RecordVideoScreen/RecordVideoScreenContainer.

You can see the full list of props for the Expo.Camera component in the documentation.

In this application, we will use the following props for Expo.Camera:

  • type: The camera type is set (front or back).
  • onCameraReady: This callback is invoked when the camera preview is set. You won’t be able to start recording if the camera is not ready.
  • style: This sets the styles for the camera container. In this case, the size is 4:3.
  • ref: This is used for direct access to the camera component.

Let’s add the variable for saving the type and handler for its changing.

cameraType: Camera.Constants.Type.back,
toggleCameraType: state => () => ({
      cameraType: state.cameraType === Camera.Constants.Type.front
        ? Camera.Constants.Type.back
        : Camera.Constants.Type.front,
    }),

Let’s add the variable for saving the camera ready state and callback for onCameraReady.

isCameraReady: false,

setCameraReady: () => () => ({ isCameraReady: true }),

Let’s add the variable for saving the camera component reference and setter.

cameraRef: null,

setCameraRef: () => cameraRef => ({ cameraRef }),
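Pieced together, these fragments might sit in a single withStateHandlers call roughly like this (a sketch; the container in the repository may organize them differently):

withStateHandlers({
  cameraType: Camera.Constants.Type.back,
  isCameraReady: false,
  cameraRef: null,
}, {
  toggleCameraType: state => () => ({
    cameraType: state.cameraType === Camera.Constants.Type.front
      ? Camera.Constants.Type.back
      : Camera.Constants.Type.front,
  }),
  setCameraReady: () => () => ({ isCameraReady: true }),
  setCameraRef: () => cameraRef => ({ cameraRef }),
}),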

Let’s pass these variables and handlers to the camera component.

<Camera
          type={cameraType}
          onCameraReady={setCameraReady}
          style={s.camera}
          ref={setCameraRef}
        />

Now, when calling toggleCameraType after clicking the button, the camera will switch from the front to the back.

Currently, we have access to the camera component via the reference, and we can start video recording with the help of cameraRef.recordAsync().

The method recordAsync starts recording a video to be saved to the cache directory.

Arguments:

Options (object) — a map of options:

  • quality (VideoQuality): Specify the quality of the recorded video. Usage: Camera.Constants.VideoQuality['<value>']; possible values for 16:9 resolution are 2160p, 1080p, 720p and 480p (Android only), and for 4:3 the size is 640×480. If the chosen quality is not available for the device, the highest available one is chosen.
  • maxDuration (number): Maximum video duration in seconds.
  • maxFileSize (number): Maximum video file size in bytes.
  • mute (boolean): If present, video will be recorded with no sound.

recordAsync returns a promise that resolves to an object containing the video file’s uri property. You will need to save the file’s URI in order to play back the video later. The promise resolves when stopRecording is invoked, when one of maxDuration or maxFileSize is reached, or when the camera preview is stopped.

Because the ratio set for the camera component sides is 4:3, let’s set the same format for the video quality.

Here is what the handler for starting video recording looks like (see the full code of the container in the repository):

onStartRecording: props => async () => {
      if (props.isCameraReady) {
        props.setState({ isRecording: true, fileUrl: null });
        props.setVideoDuration();
        props.cameraRef.recordAsync({ quality: '4:3' })
          .then((file) => {
            props.setState({ fileUrl: file.uri });
          });
      }
    },

During video recording, we can’t receive the recording status as we did for audio. That’s why I have created a function to track the video duration.
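That function isn’t shown in the article. A minimal sketch of what it might look like, ticking once per second while recording (the videoDuration and interval state fields are assumptions):

// Hypothetical sketch; the actual setVideoDuration in the repository may differ.
setVideoDuration: props => () => {
  let duration = 0;
  props.setState({ videoDuration: 0 });
  const interval = setInterval(() => {
    duration += 1;
    props.setState({ videoDuration: duration });
  }, 1000);
  // stopRecording clears this interval via props.interval.
  props.setState({ interval });
},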

To stop the video recording, we have to call the following function:

stopRecording: props => () => {
      if (props.isRecording) {
        props.cameraRef.stopRecording();
        props.setState({ isRecording: false });
        clearInterval(props.interval);
      }
    },

Check out the entire process of video recording.

Video Playback Functionality With React Native

You can play back the video on the “Library” screen. Video notes are located in the “Video” tab.

To start the video playback, click the selected item in the list. Then, switch to the playback screen, where you can watch or delete the video.

The UI for video playback looks like this:

Large preview

To play back the video, use Expo.Video, a component that displays a video inline with the other React Native UI elements in your app.

The video will be displayed on the separate screen, PlayVideo.

You can check out all of the props for Expo.Video here.

In our application, the Expo.Video component uses native playback controls and looks like this:

<Video
        source={{ uri: videoUrl }}
        style={s.video}
        shouldPlay={isPlaying}
        resizeMode="contain"
        useNativeControls={isPlaying}
        onLoad={onLoad}
        onError={onError}
      />
  • source

    This is the source of the video data to display. The same forms as for Expo.Audio.Sound are supported.

  • resizeMode

    This is a string describing how the video should be scaled for display in the component view’s bounds. It can be “stretch”, “contain” or “cover”.

  • shouldPlay

    This boolean describes whether the media is supposed to play.

  • useNativeControls

    This boolean, if set to true, displays native playback controls (such as play and pause) within the video component.

  • onLoad

    This function is called once the video has been loaded.

  • onError

    This function is called if loading or playback has encountered a fatal error. The function passes a single error message string as a parameter.

When the video has loaded, the play button should be rendered on top of it.

When you click the play button, the video turns on and the native playback controls are displayed.
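As an illustration (this is not the repository’s actual view code), the overlay could be as simple as a touchable that is rendered only while the video isn’t playing:

// Hypothetical sketch of the play-button overlay; PlayVideoScreenView in the repository may differ.
import React from 'react';
import { TouchableOpacity, Text, StyleSheet } from 'react-native';

const PlayButtonOverlay = ({ isPlaying, isLoading, isError, onTogglePlaying }) => {
  // Hide the overlay while the video is playing, still loading, or failed to load.
  if (isPlaying || isLoading || isError) return null;
  return (
    <TouchableOpacity style={styles.button} onPress={onTogglePlaying}>
      <Text style={styles.label}>▶</Text>
    </TouchableOpacity>
  );
};

const styles = StyleSheet.create({
  button: { position: 'absolute', alignSelf: 'center', top: '45%' },
  label: { fontSize: 48, color: '#fff' },
});

export default PlayButtonOverlay;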

Let’s write the logic of the video using Recompose in the screen container src/screens/PlayVideoScreen/PlayVideoScreenContainer:

const defaultState = {
  isError: false,
  isLoading: false,
  isPlaying: false,
};

const enhance = compose(
  paramsToProps('videoUrl'),
  withStateHandlers({
    ...defaultState,
    isLoading: true,
  }, {
    onError: () => () => ({ ...defaultState, isError: true }),
    onLoad: () => () => defaultState,
   onTogglePlaying: ({ isPlaying }) => () => ({ ...defaultState, isPlaying: !isPlaying }),
  }),
);

As previously mentioned, the Expo.Audio.Sound objects and Expo.Video components share a unified imperative API for media playback. That’s why you can create custom controls and use more advanced functionality with the Playback API.

Check out the video playback process:

See the full code for the application in the repository.

You can also install the app on your phone by using Expo and check out how it works in practice.

Wrapping Up

I hope you have enjoyed this article and have enriched your knowledge of React Native. You can use this audio and video recording tutorial to create your own custom-designed media player. You can also scale the functionality and add the ability to save media in the phone’s memory or on a server, synchronize media data between different devices, and share media with others.

As you can see, there is a wide scope for imagination. If you have any questions about the process of developing an audio or video recording app with React Native, feel free to drop a comment below.


Source: Smashing Magazine, How To Create An Audio/Video Recording App With React Native: An In-Depth Tutorial

Which Podcasts Should Web Designers And Developers Be Listening To?


Ricky Onsman



We asked the Smashing community what podcasts they listened to, aiming to compile a shortlist of current podcasts for web designers and developers. We had what can only be called a very strong response — both in number and in passion.

First, we winnowed out the podcasts that were on a broader theme (e.g. creativity, mentoring, leadership), on a narrower theme (e.g. on one specific WordPress theme) or on a completely different theme (e.g. car maintenance — I’m sure it was well-intentioned).

When we filtered out those that had produced no new content in the last three months or more (although then we did have to make some exceptions, as you’ll see), and ordered the rest according to how many times they were nominated, we had a graded shortlist of 55.

Agreed, that’s not a very short shortlist.

So, we broke it down into the five more reasonably sized shortlists you’ll find below.

Obviously, it’s highly unlikely anyone could — or would want to — listen to every episode of every one of these podcasts. Still, we’re pretty sure that any web designer or developer will find a few podcasts in this lot that will suit their particular listening tastes.

A few caveats before we begin:

  • We don’t claim to be comprehensive. These lists are drawn from suggestions from readers (not all of which were included) plus our own recommendations.
  • The descriptions are drawn from reader comments, summaries provided by the podcast provider and our own comments. Podcast running times and frequency are, by and large, approximate. The reality is podcasts tend to vary in length, and rarely stick to their stated schedule.
  • We’ve listed each podcast once only, even though several could qualify for more than one list.
  • We’ve excluded most videocasts. This is just for listening (videos probably deserve their own article).

Podcasts For Web Developers

Syntax

Wes Bos and Scott Tolinski dive deep into web development topics, explaining how they work and talking about their own experiences. They cover everything from JavaScript frameworks like React to the latest advancements in CSS to simplifying web tooling. 30-70 minutes. Weekly.

Developer Tea

A podcast for developers designed to fit inside your tea break, a highly-concentrated, short, frequent podcast specifically for developers who like to learn on their tea (and coffee) break. The Spec Network also produces Design Details. 10-30 minutes. Every two days.

Web Platform Podcast

Covers the latest in browser features, standards, and the tools developers use to build for the web of today and beyond. Founded in 2014 by Erik Isaksen. Hosts Danny, Amal, Leon, and Justin are joined by a special guest to discuss the latest developments. 60 minutes. Weekly.

Devchat Podcasts

Fourteen podcasts with a range of hosts that each explore developments in a specific aspect of development or programming, including Ruby, iOS, Angular, JavaScript, React, Rails, security, conference talks, and freelancing. 30-60 minutes. Weekly.

The Bike Shed

Hosts Derek Prior, Sean Griffin, Amanda Hill and guests discuss their development experience and challenges with Ruby, Rails, JavaScript, and whatever else is drawing their attention, admiration, or ire at any particular moment. 30-45 minutes. Weekly.

NodeUp

Hosted by Rod Vagg and a series of occasional co-hosts, this podcast features lengthy discussions with guests and panels about Node.js and Node-related topics. 30-90 minutes. Weekly / Monthly.

.NET Rocks

Carl Franklin and Richard Campbell host an internet audio talk show for anyone interested in programming on the Microsoft .NET platform, including basic information, tutorials, product developments, guests, tips and tricks. 60 minutes. Twice a week.

Three Devs and a Maybe

Join Michael Budd, Fraser Hart, Lewis Cains, and Edd Mann as they discuss software development, frequently joined by a guest on the show’s topic, ranging from daily developer life, PHP, frameworks, testing, good software design and programming languages. 45-60 minutes. Weekly.

Weekly Dev Tips

Hosted by experienced software architect, trainer, and entrepreneur Steve Smith, Weekly Dev Tips offers a variety of technical and career tips for software developers. Each tip is quick and to the point, describing a problem and one or more ways to solve that problem. 5-10 minutes. Weekly.

devMode.fm

Dedicated to the tools, techniques, and technologies used in modern web development. Each episode, Andrew Welch and Patrick Harrington lead a cadre of hosts discussing the latest hotness, pet peeves, and the frontend development technologies we use. 60-90 minutes. Twice a week.

CodeNewbie

Stories from people on their coding journey. New episodes published every Monday. The most supportive community of programmers and people learning to code. Founded by Saron Yitbarek. 30-60 minutes. Weekly.

Front End Happy Hour

A podcast featuring panels of engineers from @Netflix, @Evernote, @Atlassian and @LinkedIn talking over drinks about all things Front End development. 45-60 minutes. Every two weeks.

Under the Radar

From development and design to marketing and support, Under the Radar is all about independent app development. Hosted by David Smith and Marco Arment. 30 minutes. Weekly.

Hanselminutes

Scott Hanselman interviews movers and shakers in technology in this commute-time show. From Michio Kaku to Paul Lutus, Ward Cunningham to Kimberly Bryant, Hanselminutes is talk radio guaranteed not to waste your time. 30 minutes. Weekly.

Fixate on Code

Since October 2017, Larry Botha from South African design agency Fixate has been interviewing well-known achievers in web design and development on how to help front end developers write better code. 30 minutes. Weekly.

Podcasts For Web Designers

99% Invisible

Design is everywhere in our lives, perhaps most importantly in the places where we’ve just stopped noticing. 99% Invisible is a weekly exploration of the process and power of design and architecture, from award-winning producer Roman Mars. 20-45 minutes. Weekly.

Design Details

A show about the people who design our favorite products, hosted by Bryn Jackson and Brian Lovin. The Spec Network also produces Developer Tea. 60-90 minutes. Weekly.

Presentable

Host Jeffrey Veen brings over two decades of experience as a designer, developer, entrepreneur, and investor as he chats with guests about how we design and build the products that are shaping our digital future and how design is changing the world. 45-60 minutes. Weekly.

Responsive Web Design

In each episode, Karen McGrane and Ethan Marcotte (who coined the term “responsive web design”) interview the people who make responsive redesigns happen. 15-30 minutes. Weekly. (STOP PRESS: Karen and Ethan issued their final episode of this podcast on 26 March 2018.)

RWD Podcast

Host Justin Avery explores new and emerging web technologies, chats with web industry leaders and digs into all aspects of responsive web design. 10-60 minutes. Weekly / Monthly.

UXPodcast

Business, technology and people in digital media. Moving the conversation beyond the traditional realm of User Experience. Hosted by Per Axbom and James Royal-Lawson from Sweden. 30-45 minutes. Every two weeks.

UXpod

A free-ranging set of discussions on matters of interest to people involved in user experience design, website design, and usability in general. Gerry Gaffney set this up to provide a platform for discussing topics of interest to UX practitioners. 30-45 minutes. Weekly / Monthly.

UX-radio

A podcast about IA, UX and Design that features collaborative discussions with industry experts to inspire, educate and share resources with the community. Created by Lara Fedoroff and co-hosted with Chris Chandler. 30-45 minutes. Weekly / Monthly.

User Defenders

Host Jason Ogle aims to highlight inspirational UX Designers leading the way in their craft, by diving deeper into who they are, and what makes them tick/successful, in order to inspire and equip those aspiring to do the same. 30-90 minutes. Weekly.

The Drunken UX Podcast

Our hosts Michael Fienen and Aaron Hill look at issues facing websites and developers that impact the way we all use the web. “In the process, we’ll drink drinks, share thoughts, and hopefully make you laugh a little.” 60 minutes. Twice a week.

UI Breakfast Podcast

Join Jane Portman for conversations about UI/UX design, products, marketing, and so much more, with awesome guests who are industry experts ready to share actionable knowledge. 30-60 minutes. Weekly.

Efficiently Effective

Saskia Videler keeps us up to date with what’s happening in the field of UX and content strategy, aiming to help content experts, UX professionals and others create better digital experiences. 25-40 minutes. Monthly.

The Honest Designers Show

Hosts Tom Ross, Ian Barnard, Dustin Lee and Lisa Glanz have each found success in their creative fields and are here to give struggling designers a completely honest, under-the-hood look at what it takes to flourish in the modern world. 30-60 minutes. Weekly.

Design Life

A podcast about design and side projects for motivated creators. Femke von Schoonhoven and Charli Prangley (serial side project addicts) saw a gap in the market for a conversational show hosted by two females about design and issues young creatives face. 30-45 minutes. Weekly.

Layout FM

A weekly podcast about design, technology, programming and everything else, hosted by Kevin Clark and Rafael Conde. 60-90 minutes. Weekly.

Bread Time

Gabriel Valdivia and Charlie Deets host this micro-podcast about design and technology, the impact of each on the other, and the impact of them both on all of us. 10-30 minutes. Weekly.

The Deeply Graphic DesignCast

Every episode covers a new graphic design-related topic, and a few relevant tangents along the way. Wes McDowell and his co-hosts also answer listener-submitted questions in every episode. 60 minutes. Every two weeks.

Podcasts On The Web, The Internet, And Technology

The Big Web Show

Veteran web designer and industry standards champion Jeffrey Zeldman is joined by special guests to address topics like web publishing, art direction, content strategy, typography, web technology, and more. 60 minutes. Weekly.

ShopTalk

A podcast about front end web design, development and UX. Each week Chris Coyier and Dave Rupert are joined by a special guest to talk shop and answer listener-submitted questions. 60 minutes. Weekly.

Boagworld

Paul Boag and Marcus Lillington are joined by a variety of guests to discuss a range of web design related topics. Fun, informative and quintessentially British, with content for designers, developers and website owners, something for everybody. 60 minutes. Weekly.

The Changelog

Conversations with the hackers, leaders, and innovators of open source. Hosts Adam Stacoviak and Jerod Santo do in-depth interviews with the best and brightest software engineers, hackers, leaders, and innovators. 60-90 minutes. Weekly.

Back to Front Show

Topics under discussion, hosted by Keir Whitaker and Kieran Masterton, include remote working, working in the web industry, productivity, hipster beards and much more. Released irregularly but always produced with passion. 30-60 minutes. Weekly / Monthly.

The Next Billion Seconds

The coming “next billion seconds” are the most important in human history, as technology transforms the way we live and work. Mark Pesce talks to some of the brightest minds shaping our world. 30-60 minutes. Every two weeks.

Toolsday

Hosted by Una Kravets and Chris Dhanaraj, Toolsday is about the latest in tech tools, tips, and tricks. 30 minutes. Weekly.

Reply All

A podcast about the internet, often delving deeper into modern life. Hosted by PJ Vogt and Alex Goldman from US narrative podcasting company Gimlet Media. 30-60 minutes. Weekly.

CTRL+CLICK CAST

Diverse voices from industry leaders and innovators, who tackle everything from design, code and CMS, to culture and business challenges. Focused, topical discussions hosted by Lea Alcantara and Emily Lewis. 60 minutes. Every two weeks.

Modern Web

Explores next generation frameworks, standards, and techniques. Hosted by Tracy Lee. Topics include EmberJS, ReactJS, AngularJS, ES2015, RxJS, functional reactive programming. 60 minutes. Weekly.

Relative Paths

A UK-based podcast on “web development and stuff like that” for web industry types. Hosted by Mark Phoenix and Ben Hutchings. 60 minutes. Every two weeks.

Business Podcasts For Web Professionals

The Businessology Show

The Businessology Show is a podcast about the business of design and the design of business, hosted by CPA/coach Jason Blumer. 30 minutes. Monthly.

CodePen Radio

Chris Coyier, Alex Vazquez, and Tim Sabat, the co-founders of CodePen, talk about the ins and outs of running a small web software business. The good, the bad, and the ugly. 30 minutes. Weekly.

BizCraft

Podcast about the business side of web design, recorded live almost every two weeks. Your hosts are Carl Smith of nGen Works and Gene Crawford of UnmatchedStyle. 45-60 minutes. Every two weeks.

Podcasts That Don’t Have Recent Episodes (But Do Have Great Archives)

Design Review Podcast

No chit-chat, just focused, in-depth discussions about design topics that matter. Jonathan Shariat and Chris Liu are your hosts, and they bring passion and years of experience to the table. 30-60 minutes. Every two weeks. Last episode 26 November 2017.

Style Guide Podcast

A small batch series of interviews (20 in total) on Style Guides, hosted by Anna Debenham and Brad Frost, with high-profile designer guests. 45 minutes. Weekly. Last episode 19 November 2017.

True North

Looks to uncover the stories of everyday people creating and designing, and highlights the research and testing that drives innovation. Produced by Loop11. 15-60 minutes. Every two weeks. Last episode 18 October 2017.

UIE.fm Master Feed

Get all episodes from every show on the UIE network in this master feed: UIE Book Corner (with Adam Churchill) and The UIE Podcast (with Jared Spool), plus some archived older shows. 15-60 minutes. Weekly. Last episode 4 October 2017.

Let’s Make Mistakes

A podcast about design with your hosts, Mike Monteiro, Liam Campbell, Steph Monette, and Seven Morris, plus a range of guests who discuss good design, business and ethics. 45-60 minutes. Weekly / Monthly. Last episode 3 August 2017.

Motion and Meaning

A podcast about motion for digital designers brought to you by Val Head and Cennydd Bowles, covering everything from the basic principles of animation through to advanced tools and techniques. 30 minutes. Monthly. Last episode 13 December 2016.

The Web Ahead

Conversations with world experts on changing technologies and the future of the web. The Web Ahead is your shortcut to keeping up. Hosted by Jen Simmons. 60-100 minutes. Monthly. Last episode 30 June 2016.

Unfinished Business

UK designer Andy Clarke and guests have plenty to talk about, mostly on and around web design, creative work and modern life. 60-90 minutes. Monthly. Last episode 28 June 2016. (STOP PRESS: A new episode was issued on 20 March 2018. Looks like it’s back in action.)

Dollars to Donuts

A podcast where Steve Portigal talks with the people who lead user research in their organizations. 50-75 minutes. Irregular. Last episode 10 May 2016.

Any Other Good Ones Missing?

As we noted, there are probably many other good podcasts out there for web designers and developers. If we’ve missed your favorite, let us know about it in the comments, or in the original threads on Twitter or Facebook.


Source: Smashing Magazine, Which Podcasts Should Web Designers And Developers Be Listening To?

How To Improve Your Design Process With Data-Based Personas


Tim Noetzel



Most design and product teams have some type of persona document. Theoretically, personas help us better understand our users and meet their needs. The idea is that codifying what we’ve learned about distinct groups of users helps us make better design decisions. Referring to these documents ourselves and sharing them with non-design team members and external stakeholders should ultimately lead to a user experience more closely aligned with what real users actually need.

In reality, personas rarely prove equal to these expectations. On many teams, persona documents sit abandoned on hard drives, collecting digital dust while designers continue to create products based primarily on whim and intuition.

In contrast, well-researched personas serve as a proxy for the user. They help us check our work and ensure that we’re building things users really need.

In fact, the best personas don’t just describe users; they actually help designers predict their behavior. In her article on persona creation, Laura Klein describes it perfectly:

“If you can create a predictive persona, it means you truly know not just what your users are like, but the exact factors that make it likely that a person will become and remain a happy customer.”

In other words, useful personas actually help design teams make better decisions because they can predict with some accuracy how users will respond to potential product changes.

Obviously, for personas to facilitate these types of predictions, they need to be based on more than intuition and anecdotes. They need to be data-driven.

So, what do data-driven personas look like, and how do you make one?

Start With What You Think You Know

The first step in creating data-driven personas is similar to the typical persona creation process. Write down your team’s hypotheses about what the key user groups are and what’s important to each group.

If your team is like most, some members will disagree with others about which groups are important, the particular makeup and qualities of each persona, and so on. This type of disagreement is healthy, but unlike the usual persona creation process you may be used to, you’re not going to get bogged down here.

Instead of debating the merits of each persona (and the various facets and permutations thereof), the important thing is to be specific about the different hypotheses you and your team have and write them down. You’re going to validate these hypotheses later, so it’s okay if your team disagrees at this stage. You may choose to focus on a few particular personas, but make sure to keep a backlog of other ideas as well.

Hypothetical Personas

First, start by recording all the hypotheses you have about key personas. You’ll refine these through user research in the next step.

I recommend aiming for a short, 1–2 sentence description of each hypothetical persona that details who they are, what root problem they hope to solve by using your product, and any other pertinent details.

You can use the traditional user stories framework for this. If you were creating hypothetical personas for Craigslist, one of these statements might read:

“As a recent college grad, I want to find cheap furniture so I can furnish my new apartment.”

Another might say:

“As a homeowner with an extra bedroom, I want to find a responsible tenant to rent this space to so I can earn some extra income.”

If you have existing data — things like user feedback emails, NPS scores, user interview notes, or analytics data — be sure to go over them and include relevant data points in your notes along with your user stories.

Validate And Refine

The next step is to validate and refine these hypotheses with user interviews. For each of your hypothetical personas, you’ll want to start by interviewing 5 to 10 people who fit that group.

You have three key goals for these interviews. For each group, you need to:

  1. Understand the context in which they need to solve the problem.
  2. Confirm that members of the persona group agree that the problem you recorded is an urgent and painful one that they struggle to solve now.
  3. Identify key predictors of whether a member of this persona is likely to become and remain an active user.

The approach you take to these interviews may vary, but I recommend a hybrid approach between a traditional user interview, which is very non-leading, and a Lean Problem interview, which is deliberately leading.

Start with the traditional user interview approach and ask behavior-based, non-leading questions. In the Craigslist example, we might ask the recent college grad something like:

“Tell me about the last time you purchased furniture. What did you buy? Where did you buy it?”

These types of questions are great for establishing whether the interviewee recently experienced the problem in question, how they solved it, and whether they’re dissatisfied with their current solution.

Once you’ve finished asking these types of questions, move on to the Lean Problem portion of the interview. In this section, you want to tell a story about a time when you experienced the problem — establishing the various issues you struggled with and why it was frustrating — and see how they respond.

You might say something like this:

“When I graduated college, I had to get new furniture because I wasn’t living in the dorm anymore. I spent forever looking at furniture stores, but they were all either ridiculously expensive or big-box stores with super-cheap furniture I knew would break in a few weeks. I really wanted to find good furniture at a reasonable price, but I couldn’t find anything and I eventually just bought the cheap stuff. It inevitably broke, and I had to spend even more money, which I couldn’t really afford. Does any of that resonate with you?”

What you’re looking for here is emphatic agreement. If your interviewee says “yes, that resonates” but doesn’t get much more excited than they were during the rest of the interview, the problem probably wasn’t that painful for them.

Data-Driven Personas Interview Format

You can validate or invalidate your persona hypotheses with a series of quick, 30-minute interviews.

On the other hand, if they get excited, empathize with your story, or give their own anecdote about the problem, you know you’ve found a problem they really care about and need to be solved.

Finally, make sure to ask any demographic questions you didn’t cover earlier, especially those around key attributes you think might be significant predictors of whether somebody will become and remain a user. For example, you might think that recent college grads who get high-paying jobs aren’t likely to become users because they can afford to buy furniture at retail; if so, be sure to ask about income.

You’re looking for predictable patterns. If you bring in 5 members of your persona and 4 of them have the problem you’re trying to solve and desperately want a solution, you’ve probably identified a key persona.

On the other hand, if you’re getting inconsistent results, you likely need to refine your hypothetical persona and repeat this process, using what you learn in your interviews to form new hypotheses to test. If you can’t consistently find users who have the problem you want to solve, it’s going to be nearly impossible to get millions of them to use your product. So don’t skimp on this step.

Create Your Personas

The penultimate step in this process is creating the actual personas themselves. This is where things get interesting. Unlike traditional personas, which are typically static, your data-driven personas will be living, breathing documents.

The goal here is to combine the lessons you learned in the previous step — about who the user is and what they need — with data that shows how well the latest iteration of your product is serving their needs.

At my company Swish, each one of our personas includes two sections with the following data:

Predictive User Data:

  • Description of the user, including predictive demographics.
  • Quotes from at least 3 actual users that describe the jobs-to-be-done.
  • The percentage of the potential user base the persona represents.

Product Performance Data:

  • The percentage of our current user base the persona represents.
  • Latest activation, retention, and referral rates for the persona.
  • Current NPS Score for the persona.
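To make the structure concrete, here is an illustrative sketch (not from the article; every field name and value below is made up) of how one persona record could be captured as data:

// Illustrative only; field names and values are assumptions, not Swish's actual format.
const recentGradPersona = {
  name: 'Recent College Grad',
  predictiveUserData: {
    description: 'Recent graduate furnishing a first apartment on a tight budget.',
    userQuotes: [
      'I really wanted to find good furniture at a reasonable price.',
      // ...at least three quotes from real user interviews
    ],
    potentialUserBaseShare: 0.35, // share of the potential user base (made up)
  },
  productPerformanceData: {
    currentUserBaseShare: 0.28, // share of the current user base (made up)
    activationRate: 0.41,
    retentionRate: 0.22,
    referralRate: 0.05,
    npsScore: 31,
  },
};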

If you’re looking for more ideas for data to include, check out Coryndon Luxmoore’s presentation on how his team created data-driven personas at Buildium.

It may take some time for your team to produce all this information, but it’s okay to start with what you have and improve the personas over time. Your personas won’t be sitting on a shelf. Every time you release a new feature or change an existing one, you should measure the results and update your personas accordingly.

Integrate Your Personas Into Your Workflow

Now that you’ve created your personas, it’s time to actually use them in your day-to-day design process. Here are 4 opportunities to use your new data-driven personas:

  1. At Standups
    At Swish, our standups are a bit different. We start these meetings by reviewing the activation, retention, and referral metrics for each persona. This ensures that — as we discuss yesterday’s progress and today’s obstacles — we remain focused on what really matters: how well we’re serving our users.
  2. During Prioritization
    Your data-driven personas are a great way to keep team members honest as you discuss new features and changes. When you know how much of your user base the persona represents and how well you’re serving them, it quickly becomes obvious whether a potential feature could actually make a difference. Suddenly deciding what to work on won’t require hours of debate or horse-trading.
  3. At Design Reviews
    Your data-driven personas are a great way to keep team members honest as you discuss new designs. When team members can creditably represent users with actual quotes from user interviews, their feedback quickly becomes less subjective and more useful.
  4. When Onboarding New Team Members
    New hires inevitably bring a host of implicit biases and assumptions about the user with them when they start. By including your data-driven personas in their onboarding documents, you can get new team members up to speed much more quickly and ensure they understand the hard-earned lessons your team learned along the way.

Keeping Your Personas Up To Date

It’s vitally important to keep your personas up-to-date so your team members can continue to rely on them to guide their design thinking.

As your product improves, it’s simple to update NPS scores and performance data. I recommend doing this monthly at a minimum; if you’re working on an early-stage, rapidly-changing product, you’ll get better mileage by updating these stats weekly instead.

It’s also important to check in with members of your personas periodically to make sure your predictive data stays relevant. As your product evolves and the competitive landscape changes, your users’ views about their problems will change as well. If your growth starts to plateau, another round of interviews may help to unlock insights you didn’t find the first time. Even if everything is going well, try to check in with members of your personas — both current users of your product and some non-users — every 6 to 12 months.

Wrapping Up

Building data-driven personas is a challenging project that takes time and dedication. You won’t uncover the insights you need or build the conviction necessary to unify your team with a week-long throwaway project.

But if you put in the time and effort necessary, the results will speak for themselves. Having the type of clarity that data-driven personas provide makes it far easier to iterate quickly, improve your user experience, and build a product your users love.

Further Reading

If you’re interested in learning more, I highly recommend checking out the linked articles above, as well as the following resources:

Smashing Editorial
(rb, ra, yk, il)

Source: Smashing Magazine, How To Improve Your Design Process With Data-Based Personas

Collective #407

dreamt up by webguru in Uncategorized | Comments Off on Collective #407



Scroll to the future

Anna Selezniova and Andy Barnov take us on a tour of the latest CSS and JavaScript features that make navigating around a single page smooth, beautiful and less resource-hungry.

Read it







VuePress

A Vue-powered static site generator with a markdown-centered project structure.

Check it out






Remote Browser

A library for controlling web browsers like Chrome and Firefox programmatically using JavaScript. Built on top of the Web Extensions API standard.

Check it out











Collective #407 was written by Pedro Botelho and published on Codrops.


Source: Codrops, Collective #407

Best Practices With CSS Grid Layout

dreamt up by webguru in Uncategorized | Comments Off on Best Practices With CSS Grid Layout

Best Practices With CSS Grid Layout

Best Practices With CSS Grid Layout

Rachel Andrew



An increasingly common question — now that people are using CSS Grid Layout in production — seems to be “What are the best practices?” The short answer to this question is to use the layout method as defined in the specification. The particular parts of the spec you choose to use, and indeed how you combine Grid with other layout methods such as Flexbox, is down to what works for the patterns you are trying to build and how you and your team want to work.

Looking deeper, I think this request for “best practices” perhaps indicates a lack of confidence in using a layout method that is very different from what came before. Perhaps it is a concern that we are using Grid for things it wasn’t designed for, or not using Grid when we should be. Maybe it comes down to worries about supporting older browsers, or about how Grid fits into our development workflow.

In this article, I’m going to try to cover some of the things that could be described as best practices, and some things that you probably don’t need to worry about.

The Survey

To help inform this article, I wanted to find out how other people were using Grid Layout in production: what challenges they faced, and what they really enjoyed about it. Were there common questions, problems, or methods being used? To find out, I put together a quick survey asking questions about how people were using Grid Layout and, in particular, what they most liked and what they found challenging.

In the article that follows, I’ll be referencing and directly quoting some of those responses. I’ll also be linking to lots of other resources, where you can find out more about the techniques described. As it turned out, there was far more than one article worth of interesting things to unpack in the survey responses. I’ll address some of the other things that came up in a future post.

Accessibility

If there is any part of the Grid specification that you need to take care when using, it is when using anything that could cause content re-ordering:

“Authors must use order and the grid-placement properties only for visual, not logical, reordering of content. Style sheets that use these features to perform logical reordering are non-conforming.”

Grid Specification: Re-ordering and Accessibility

This is not unique to Grid; however, the ability to rearrange content so easily in two dimensions makes it a bigger problem for Grid. If you use any method that allows content re-ordering — be that Grid, Flexbox or even absolute positioning — you need to take care not to disconnect the visual experience from how the content is structured in the document. Screen readers (and people navigating around the document using only a keyboard) are going to follow the order of items in the source.

The places where you need to be particularly careful are when using flex-direction to reverse the order in Flexbox; the order property in Flexbox or Grid; any placement of Grid items using any method, if it moves items out of the logical order in the document; and using the dense packing mode of grid-auto-flow.

For more information on this issue, see the following resources:

Which Grid Layout Methods Should I Use?

”With so much choice in Grid, it was a challenge to stick to a consistent way of writing it (e.g. naming grid lines or not, defining grid-template-areas, fallbacks, media queries) so that it would be maintainable by the whole team.”

Michelle Barker working on wbsl.com

When you first take a look at Grid, it might seem overwhelming with so many different ways of creating a layout. Ultimately, however, it all comes down to things being positioned from one line of the grid to another. You have choices based on the type of layout you are trying to achieve, as well as what works well for your team and the site you are building.

There is no right or wrong way. Below, I will pick up on some of the common themes of confusion. I’ve also already covered many other potential areas of confusion in a previous article “Grid Gotchas and Stumbling Blocks.”

Should I Use An Implicit Or Explicit Grid?

The grid you define with grid-template-columns and grid-template-rows is known as the Explicit Grid. The Explicit Grid enables the naming of lines on the Grid and also gives you the ability to target the end line of the grid with -1. You’ll choose an Explicit Grid to do either of these things and in general when you have a layout all designed and know exactly where your grid lines should go and the size of the tracks.

I use the Implicit Grid most often for row tracks. I want to define the columns but then rows will just be auto-sized and grow to contain the content. You can control the Implicit Grid to some extent with grid-auto-columns and grid-auto-rows, however, you have less control than if you are defining everything.

You need to decide whether you know exactly how much content you have and therefore the number of rows and columns — in which case you can create an Explicit Grid. If you do not know how much content you have, but simply want rows or columns created to hold whatever there is, you will use the Implicit Grid.

Nevertheless, it’s possible to combine the two. In the below CSS, I have defined three columns in the Explicit Grid and three rows, so the first three rows of content will be the following:

  • A track of at least 200px in height, but expanding to take content taller,
  • A track fixed at 400px in height,
  • A track of at least 300px in height (but expands).

Any further content will go into a row created in the Implicit Grid, and I am using the grid-auto-rows property to make those tracks at least 300px tall, expanding to auto.

.grid {  
  display: grid;  
  grid-template-columns: 1fr 3fr 1fr;  
  grid-template-rows: minmax(200px, auto) 400px minmax(300px, auto);
  grid-auto-rows: minmax(300px, auto);  
  grid-gap: 20px;  
}

A Flexible Grid With A Flexible Number Of Columns

By using Repeat Notation, auto-fill, and minmax, you can create a pattern of as many tracks as will fit into a container, thus removing the need for Media Queries to some extent. This technique can be found in this video tutorial, and also demonstrated along with similar ideas in my recent article “Using Media Queries For Responsive Design In 2018.”

Choose this technique when you are happy for content to drop below earlier content when there is less space, and are happy to allow a lot of flexibility in sizing. You have specifically asked for your columns to display with a minimum size, and to auto fill.
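
A minimal sketch of that pattern (the 200px minimum is just an example value) comes down to a single track listing:

.grid {
  display: grid;
  /* Create as many columns as fit, each at least 200px and able to grow to share leftover space */
  grid-template-columns: repeat(auto-fill, minmax(200px, 1fr));
  grid-gap: 20px;
}

Swapping auto-fill for auto-fit collapses any empty tracks, letting the filled tracks stretch over the freed-up space.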

There were a few comments in the survey that made me wonder if people were choosing this method when they really wanted a grid with a fixed number of columns. If you are ending up with an unpredictable number of columns at certain breakpoints, you might be better to set the number of columns — and redefine it with media queries as needed — rather than using auto-fill or auto-fit.

Which Method Of Track Sizing Should I Use?

I described track sizing in detail in my article “How Big Is That Box? Understanding Sizing In Grid Layout,” however, I often get questions as to which method of track sizing to use. Particularly, I get asked about the difference between percentage sizing and the fr unit.

If you simply use the fr unit as specced, then it differs from using a percentage because it distributes available space. If you place a larger item into a track, then the fr unit will allow that track to take up more space and distribute what is left over.

.grid {
  display: grid;
  grid-template-columns: 1fr 1fr 1fr;
  grid-gap: 20px;
}

A three column layout, the first column is wider
The first column is wider as Grid has assigned it more space.

To cause the fr unit to distribute all of the space in the grid container you need to give it a minimum size of 0 using minmax().

.grid {
    display: grid;
    grid-template-columns: minmax(0,1fr) minmax(0,1fr) minmax(0,1fr);
    grid-gap: 20px;
}

three equal columns with the first overflowing
Forcing a 0 minimum may cause overflows

So you can choose to use fr in either of these scenarios: ones where you do want space distribution from a basis of auto (the default behavior), and those where you want equal distribution. I would typically use the fr unit as it then works out the sizing for you, and enables the use of fixed width tracks or gaps. The only time I use a percentage instead is when I am adding grid components to an existing layout that uses other layout methods too. If I want my grid components to line up with a float- or flex-based layout which is using percentages, using them in my grid layout means everything uses the same sizing method.

Auto-Place Items Or Set Their Position?

You will often find that you only need to place one or two items in your layout, and the rest fall into place based on content order. In fact, this is a really good test that you haven’t disconnected the source and visual display. If things pretty much drop into position based on auto-placement, then they are probably in a good order.

Once I have decided where everything goes, however, I do tend to assign a position to everything. This means that I don’t end up with strange things happening if someone adds something to the document and Grid auto-places it somewhere unexpected, thus throwing out the layout. If everything else is placed, Grid will put that new item into the next available empty grid cell. That might not be exactly where you want it, but an item sitting at the end of your layout is probably better than one popping into the middle and pushing other things around.
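
As a minimal sketch (the class names here are made up for illustration), assigning a position to everything can be as simple as line-based placement:

.grid {
  display: grid;
  grid-template-columns: repeat(3, 1fr);
  grid-gap: 20px;
}

/* Every item gets an explicit position, so a stray new item can only land in a leftover empty cell */
.grid .header  { grid-column: 1 / -1; grid-row: 1; }
.grid .sidebar { grid-column: 1;      grid-row: 2; }
.grid .content { grid-column: 2 / -1; grid-row: 2; }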

Which Positioning Method To Use?

When working with Grid Layout, ultimately everything comes down to placing items from one line to another. Everything else is essentially a helper for that.

Decide with your team if you want to name lines, use Grid Template Areas, or if you are going to use a combination of different types of layout. I find that I like to use Grid Template Areas for small components in particular. However, there is no right or wrong. Work out what is best for you.
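
For example, a small media-object component (the class names are hypothetical) reads very clearly when described with Grid Template Areas:

.media {
  display: grid;
  grid-template-columns: 150px 1fr;
  grid-template-areas:
    "image heading"
    "image body";
  grid-gap: 20px;
}

.media .img     { grid-area: image; }
.media .heading { grid-area: heading; }
.media .body    { grid-area: body; }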

Grid In Combination With Other Layout Mechanisms

Remember that Grid Layout isn’t the one true layout method to rule them all; it’s designed for a certain type of layout, namely two-dimensional layout. Other layout methods still exist, and you should consider each pattern and what suits it best.

I think this is actually quite hard for those of us used to hacking around with layout methods to make them do something they were not really designed for. It is a really good time to take a step back, look at the layout methods for the tasks they were designed for, and remember to use them for those tasks.

In particular, no matter how often I write about Grid versus Flexbox, I will be asked which one people should use. There are many patterns where either layout method makes perfect sense and it really is up to you. No-one is going to shout at you for selecting Flexbox over Grid, or Grid over Flexbox.

In my own work, I tend to use Flexbox for components where I want the natural size of items to strongly control their layout, essentially pushing the other items around. I also often use Flexbox because I want alignment, given that the Box Alignment properties are only available to use in Flexbox and Grid. I might have a Flex container with one child item, in order that I can align that child.
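
A minimal sketch of that single-child pattern (the class name is illustrative) needs only the Box Alignment properties:

.banner {
  display: flex;
  /* Center the lone child both horizontally and vertically */
  justify-content: center;
  align-items: center;
  min-height: 200px;
}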

A sign that perhaps Flexbox isn’t the layout method I should choose is when I start adding percentage widths to flex items and setting flex-grow to 0. The reason to add percentage widths to flex items is often because I’m trying to line them up in two dimensions (lining things up in two dimensions is exactly what Grid is for). However, try both, and see which seems to suit the content or design pattern best. You are unlikely to be causing any problems by doing so.

Nesting Grid And Flex Items

This also comes up a lot, and there is absolutely no problem with making a Grid Item also a Grid Container, thus nesting one grid inside another. You can do the same with Flexbox, making a Flex Item and Flex Container. You can also make a Grid Item and Flex Container or a Flex Item a Grid Container — none of these things are a problem!

What we can’t currently do is nest one grid inside another and have the nested grid use the grid tracks defined on the overall parent. This would be very useful and is what the subgrid proposals in Level 2 of the Grid Specification hope to solve. A nested grid currently becomes a new grid so you would need to be careful with sizing to ensure it aligns with any parent tracks.
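
As a rough sketch of what that means in practice (hypothetical class names), the nested grid below defines its own tracks, so any alignment with the parent tracks has to be managed by hand:

.parent {
  display: grid;
  grid-template-columns: repeat(4, 1fr);
  grid-gap: 20px;
}

/* A grid item that is also a grid container */
.parent .card {
  grid-column: 1 / 3; /* spans two of the parent's tracks */
  display: grid;
  grid-template-columns: 1fr 1fr; /* a new grid; these tracks do not inherit the parent's sizing */
  grid-gap: 20px;
}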

You Can Have Many Grids On One Page

A comment that popped up a few times in the survey surprised me: there seems to be an idea that a grid should be confined to the main layout, and that many grids on one page are perhaps not a good thing. You can have as many grids as you like! Use Grid for big things and small things; if it makes sense laid out as a grid, then use Grid.

Fallbacks And Supporting Older Browsers

“Grid used in conjunction with @supports has enabled us to better control the number of layout variations we can expect to see. It has also worked really well with our progressive enhancement approach meaning we can reward those with modern browsers without preventing access to content to those not using the latest technology.”

Joe Lambert working on rareloop.com

In the survey, many people mentioned older browsers, however, there was a reasonably equal split between those who felt that supporting older browsers was hard and those who felt it was easy due to Feature Queries and the fact that Grid overrides other layout methods. I’ve written at length about the mechanics of creating these fallbacks in “Using CSS Grid: Supporting Browsers Without Grid.”

In general, modern browsers are far more interoperable than their earlier counterparts. We tend to see far fewer actual “browser bugs” and if you use HTML and CSS correctly, then you will generally find that what you see in one browser is the same as in another.

We do, of course, have situations in which one browser has not yet shipped support for a certain specification, or some parts of a specification. With Grid, we have been very fortunate in that browsers shipped Grid Layout in a very complete and interoperable way within a short time of each other. Therefore, our considerations for testing tend to be to need to test browsers with Grid and without Grid. You may also have chosen to use the -ms prefixed version in IE10 and IE11, which would then require testing as a third type of browser.

Browsers which support modern Grid Layout (not the IE version) also support Feature Queries. This means that you can test for Grid support before using it.
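
A minimal sketch of that pattern (the float-based fallback is only illustrative) might look like this:

/* Fallback for browsers without Grid support */
.grid > * {
  float: left;
  width: 33.333%;
}

@supports (display: grid) {
  .grid {
    display: grid;
    grid-template-columns: repeat(3, 1fr);
    grid-gap: 20px;
  }
  /* float is ignored on grid items, but the width would still apply, so reset it */
  .grid > * {
    width: auto;
  }
}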

Testing Browsers That Don’t Support Grid

When using fallbacks for browsers without support for Grid Layout (or using the -ms prefixed version for IE10 and 11), you will want to test how those browsers render Grid Layout. To do this, you need a way to view your site in an example browser.

I would not take the approach of breaking your Feature Query by checking for support of something nonsensical, or misspelling the value grid. This approach will only work if your stylesheet is incredibly simple, and you have put absolutely everything to do with your Grid Layout inside the Feature Queries. This is a very fragile and time-consuming way to work, especially if you are extensively using Grid. In addition, an older browser will not just lack support for Grid Layout, there will be other CSS properties unsupported too. If you are looking for “best practice” then setting yourself up so you are in a good position to test your work is high up there!

There are a couple of straightforward ways to set yourself up with a proper method of testing your fallbacks. The easiest method — if you have a reasonably fast internet connection and don’t mind paying a subscription fee — is to use a service such as BrowserStack. This is a service that enables viewing of websites (even those in development on your computer) on a whole host of real browsers. BrowserStack does offer free accounts for open-source projects.


Screenshot of the download page
You can download Virtual Machines for testing from Microsoft.

To test locally, my suggestion would be to use a Virtual Machine with your target browser installed. Microsoft offers free Virtual Machine downloads with versions of IE back to IE8, and also Edge. You can also install onto the VM an older version of a browser with no Grid support at all, for example, a copy of Firefox 51 or below. After installing your elderly Firefox, be sure to turn off automatic updates as explained here, as otherwise it will quietly update itself!

You can then test your site in IE11 and in non-supporting Firefox on one VM (a far less fragile solution than misspelling values). Getting set up might take you an hour or so, but you’ll then be in a really good place to test your fallbacks.

Unlearning Old Habits

“It was my first time using Grid Layout, so there were a lot of concepts to learn and properties to understand. Conceptually, I found the most difficult thing was to unlearn all the stuff I had done for years, like clearing floats and packing everything in container divs.”

Hidde working on hiddedevries.nl/en

Many of the people responding to the survey mentioned the need to unlearn old habits and how learning Grid Layout would be easier for people completely new to CSS. I tend to agree. When teaching people in person, I find that complete beginners have little problem using Grid, while experienced developers try hard to return Grid to a one-dimensional layout method. I’ve seen attempts at “grid systems” using CSS Grid which add back in the row wrappers needed for a float- or flex-based grid.

Don’t be afraid to try out new techniques. If you have the ability to test in a few browsers and remain mindful of potential issues of accessibility, you really can’t go too far wrong. And, if you find a great way to create a certain pattern, let everyone else know about it. We are all new to using Grid in production, so there is certainly plenty to discover and share.

“Grid Layout is the most exciting CSS development since media queries. It’s been so well thought through for real-world developer needs and is an absolute joy to use in production – for designers and developers alike.”

Trys Mudford working on trysmudford.com

To wrap up, here is a very short list of current best practices! If you have discovered things that do or don’t work well in your own situation, add them to the comments.

  1. Be very aware of the possibility of content re-ordering. Check that you have not disconnected the visual display from the document order.
  2. Test using real target browsers with a local or remote Virtual Machine.
  3. Don’t forget that older layout methods are still valid and useful. Try different ways to achieve patterns. Don’t be hung up on having to use Grid.
  4. Know that as an experienced front-end developer you are likely to have a whole set of preconceptions about how layout works. Try to look at these new methods anew rather than forcing them back into old patterns.
  5. Keep trying things out. We’re all new to this. Test your work and share what you discover.
Smashing Editorial
(il)

Source: Smashing Magazine, Best Practices With CSS Grid Layout

Monthly Web Development Update 4/2018: On Effort, Bias, And Being Productive

dreamt up by webguru in Uncategorized | Comments Off on Monthly Web Development Update 4/2018: On Effort, Bias, And Being Productive

Monthly Web Development Update 4/2018: On Effort, Bias, And Being Productive

Monthly Web Development Update 4/2018: On Effort, Bias, And Being Productive

Anselm Hannemann



These days, one of the biggest challenges is to think long-term. In a world where we live with devices that last only a few months or maybe a few years, where we buy stuff only to throw it away days or weeks later, the term ‘effort’ gains a new meaning.

Recently, I was reading an essay on ‘Yatnah’, ‘Effort’. I spent a lot of time outside in nature in the past weeks and created a small acre to grow some vegetables. I also attended a workshop to learn the craft of grafting fruit trees. When you cut a tree, you realize that our fast-living, short-term lifestyle is very different from how nature works. I grafted a tree that is supposed to grow for decades now, and if you cut a tree that has been there for forty years, it’ll take another forty to grow one that will be similarly tall.

I’d love for us all to try to create more long-lasting work, software that still works in a decade, and, in order to do so, to put effort into learning how we can make that happen. For now, I’ll leave you with this quote and a bunch of interesting articles.

“In our modern world it can be tempting to throw effort away and replace it with a few phrases of positive thinking. But there is just no substitute for practice”.

— Kino Macgregor

News

  • The Safari Technology Preview 52 removes support for all NPAPI plug-ins other than Adobe Flash and adds support for preconnect link headers.
  • Chrome 66 Beta brings the CSS Typed Object Model, Async Clipboard API, AudioWorklets, and support to use calc(), min(), and max() in Media Queries. Additionally, select and textarea fields now support the autocomplete attribute, and the catch clause of a try statement can be used without a parameter from now on.
  • iOS 11.3 is available to the public now, and, as already announced, the release brings support for Progressive Web Apps to iOS. Maximiliano Firtman shares what this means, what works and what doesn’t work (yet).
  • Safari 11.1 is now available for everyone. Here is a summary of all the new WebKit features it includes.
Progressive Web App on iOS
Progressive Web Apps for iOS are here. Full screen, offline capable, and even visible in the iPad’s dock. (Image credit)

General

  • Anil Dash reflects on what the web was intended to be and how today’s web differs from this: “At a time when millions are losing trust in the web’s biggest sites, it’s worth revisiting the idea that the web was supposed to be made out of countless little sites. Here’s a look at the neglected technologies that were supposed to make it possible.”
  • Morten Rand-Hendriksen wrote about using ethics in web design and what questions we should ask ourselves when suggesting a solution, creating a design, or adding a new feature. Especially when we think we’re making something ‘smart’, it’s important to first ask whether it actually helps people.
  • A lot of protest and discussion came along with the Facebook / Cambridge Analytica affair, most of it pointing out the technological problems with Facebook’s permission model. But the crux lies in how Facebook designed their company and which ethical baseline they set. If we don’t want something like this to happen again, it’s up to us to design the service we want.
  • Brendan Dawes shares why he thinks URLs are a masterpiece and a user experience by themselves.
  • Charlie Owen’s talk transcription of “Dear Developer, The Web Isn’t About You” is a good summary of why we as developers need to think beyond what’s good for us and consider what serves the users and how we can achieve that instead.

UI/UX

  • B. Kaan Kavuştuk shares his thoughts about why we won’t be able to build a perfect design or codebase on the first try, no matter how much experience we have. Instead, it’s the constant small improvements that pave the way to perfection.
  • Trine Falbe introduces us to Ethical Design with a practical getting-started guide. It shows alternatives and things to think about when building a business or product. It doesn’t matter much if you’re the owner, a developer, a designer or a sales person, this is about serving users and setting the ground for real and sustainable trust.
  • Josh Lovejoy shares his learnings from working on inclusive tech solutions and why it takes more than good intentions to create fair, inclusive technology. This article goes into depth on why human judgment is difficult and often based on bias, and why, because of this, it isn’t easy to design and develop algorithms that treat different people equally.
  • The HSB (Hue, Saturation, Brightness) color system isn’t especially new, but a lot of people still don’t understand its advantages. Erik D. Kennedy explains its principles and advantages step-by-step.
  • While there’s more discussion about inclusive design these days, it’s often seen under the accessibility hat or as technical decisions. Robert del Prado now shares how important inclusive design thinking is and why it’s much more about the generic user than some specific people with specific disabilities. Inclusive design brings people together, regardless of who they are, where they live, and what they can afford. And isn’t it the goal of every product to be successful by acquiring as many people as possible? Maybe we need to discuss this with marketing people as well.
  • Anton Lovchikov shares ways to improve optical adjustments in components. It’s an interesting study on how very small changes can make quite a difference.
Fair Is Not The Default
Afraid or angry? Which emotion we think the baby is showing depends on whether we think it’s a girl or a boy. Josh Lovejoy explains how personal bias and judgments like this one lead to unfair products. (Image credit)

Tooling

  • Brian Schrader found an unknown feature in Git which is very helpful to test ideas quickly: Git Notes lets us add, remove, or read notes attached to objects, without touching the objects themselves and without needing to commit the current state.
  • For many projects, I prefer to use npm scripts over calling gulp or direct webpack tasks. Michael Kühnel shares some useful tricks for npm scripts, including how to allow CLI option parameters or how to watch tasks and alert notices on error.
  • Anton Sten explains why new tools don’t always equal productivity. We all love new design tools, and new ones such as Sketch, Figma, Xd, or Invision Studio keep popping up. But despite these tools solving a lot of common problems and making some things easier, productivity is mostly about what works for your problem and not what is newest. If you need to create a static mockup and Photoshop is what you know best, why not use it?
  • There’s a new, fast DNS service available from Cloudflare. Finally a better alternative to the much-used Google DNS servers, it is available under 1.1.1.1. The new DNS is the fastest and probably one of the most secure ones out there. Cloudflare put a lot of effort into encrypting the service and partnered up with Mozilla to make DNS over HTTPS work, closing a big privacy gap that until now leaked all your browsing data to the DNS provider.
  • I heard a lot about iOS machine learning already, but despite the interesting fact that they’re able to do this on the device without sending everything to a cloud, I haven’t found out how to make use of this for apps, yet. Luckily, Manu Rink put together a nice guide in which she explains machine learning in iOS for beginners.
  • There’s great news for the Git GUI fans: Tower now offers a new beta version that includes pull request support, interactive rebase workflows, quick actions, reflog, and search. An amazing update that makes working with the software much faster than before, and even for me as a command line lover it’s a nice option.
Machine Learning In iOS For The Noob
Manu Rink shows how machine learning in iOS works by building an offline handwritten text recognition. (Image credit)

Security

Web Performance

Accessibility

CSS

  • Amber Wilson shares some insights into what it feels like to be thrown into a complex project in order to do the styling there. She rightly says that “nobody said CSS is easy” and expresses how important it is that we as developers face inconvenient situations in order to grow our knowledge.
  • Ana Tudor is known for her special CSS skills. Now she explores and describes how we can achieve scooped corners in CSS with some clever tricks.
Scooped Corners
Scooped corners? Ana Tudor shows how to do it. (Image credit)

JavaScript

  • WebKit got an upgrade for the Clipboard API, and the team gives some very interesting insights into how it works and how Safari will handle some of the common challenges with clipboard data (e.g. images).
  • If you work with key value stores that live only in the frontend, IDB-Keyval is a great lightweight library that simplifies working with IndexedDB and localStorage.
  • Ever wanted to create graphics from your data with a hand-drawn, sketchy look on a website? Rough.js lets you do just that. It’s usually Canvas-based (for better performance and less data) but can also draw SVG paths.
  • If you need a drag-and-drop reorder module, there’s a smooth and accessible solution available now: dragon-drop.
  • For many years, we could only get CSS values in their computed value and even that wasn’t flexible or nice to work with. But now CSS has a proper object-based API for working with values in JavaScript: the CSS Typed Object Model. It’s only available in the upcoming Chrome 66 yet but definitely a promising feature I’d love to use in my code soon.
  • The React.js documentation now has an extra section that explains how to easily and programmatically manage focus states to ensure your UI is accessible.
  • James Milner shares how we can use abortable fetch to cancel requests.
  • There are a few articles about Web Push Notifications out there already, but Oleksii Rudenko’s getting-started guide is a great primer that explains the principles very well.
  • In the past years, we got a lot of new features on the JavaScript platform. And since it’s hard to remember all the new stuff, Raja Rao DV summed up “Everything new in ECMAScript 2016, 2017, and 2018”.

Work & Life

  • To raise awareness of how common such situations are for all of us, James Bennett shares an embarrassing situation where he made a simple mistake that took him a long time to track down. It’s not just me making mistakes, it’s not just you, and not just James — all of us make mistakes, and as embarrassing as they seem in that particular moment, there’s nothing to feel bad about.
  • Adam Blanchard says “People are machines. We need maintenance, too.” and creates a comparison for engineers to understand why we need to take care of ourselves and also why we need people who take care of us. This is an insight into what People Engineers do, and why it’s so important for companies to hire such people to ensure a team is healthy.
  • If there’s one thing we don’t talk much about in the web industry, it’s retirement. Jan Chipchase now wrote a lot of interesting thoughts all about retirement.
  • Rebecca Downes shares some insights into her PhD on remote teams, revealing under which circumstances remote teams are great and under which they’re not.
What would people engineers do
People need maintenance, too. That’s where the People Engineer comes in. (Image credit)

Going Beyond…

We hope you enjoyed this Web Development Update. The next one is scheduled for Friday, May 18th. Stay tuned.

Smashing Editorial
(cm)

Source: Smashing Magazine, Monthly Web Development Update 4/2018: On Effort, Bias, And Being Productive

Collective #406

dreamt up by webguru in Uncategorized | Comments Off on Collective #406







Glide 3.0

The carousel library Glide has evolved into a dependency-free and light-weight script. Made by Jędrzej Chałubek.

Check it out







ES6 Syntax and Feature Overview

An introduction to ES6 syntax and features, such as classes, Promises, constants, and destructuring, with one-to-one comparisons to older versions of JavaScript. By Tania Rascia.

Read it












Collective #406 was written by Pedro Botelho and published on Codrops.


Source: Codrops, Collective #406

Automating Your Feature Testing With Selenium WebDriver

dreamt up by webguru in Uncategorized | Comments Off on Automating Your Feature Testing With Selenium WebDriver

Automating Your Feature Testing With Selenium WebDriver

Automating Your Feature Testing With Selenium WebDriver

Nils Schütte



This article is for web developers who wish to spend less time testing the front end of their web applications but still want to be confident that every feature works fine. It will save you time by automating repetitive online tasks with Selenium WebDriver. You will find a step-by-step example for automating and testing the login function of WordPress, but you can also adapt the example for any other login form.

What Is Selenium And How Can It Help You?

Selenium is a framework for the automated testing of web applications. Using Selenium, you can basically automate every task in your browser as if a real person were to execute the task. The interface used to send commands to the different browsers is called Selenium WebDriver. Implementations of this interface are available for every major browser, including Mozilla Firefox, Google Chrome and Internet Explorer.

Automating Your Feature Testing With Selenium WebDriver

Which type of web developer are you? Are you the disciplined type who tests all key features of your web application after each deployment? If so, you are probably annoyed by how much time this repetitive testing consumes. Or are you the type who just doesn’t bother with testing key features and always thinks, “I should test more, but I’d rather develop new stuff”? If so, you probably only find bugs by chance or when your client or boss complains about them.

I have been working for a well-known online retailer in Germany for quite a while, and I always belonged to the second category: It was so exciting to think of new features for the online shop, and I didn’t like at all going over all of the previous features again after each new software deployment. So, the strategy was more or less to hope that all key features would work.

One day, we had a serious drop in our conversion rate and started digging in our web analytics tools to find the source of this drop. It took quite a while before we found out that our checkout had not been working properly since the previous software deployment.

This was the day when I started to do some research about automating our testing process of web applications, and I stumbled upon Selenium and its WebDriver. Selenium is basically a framework that allows you to automate web browsers. WebDriver is the name of the key interface that allows you to send commands to all major browsers (mobile and desktop) and work with them as a real user would.

Preparing The First Test With Selenium WebDriver

First, I was a little skeptical of whether Selenium would suit my needs because the framework is most commonly used in Java, and I am certainly not a Java expert. Later, I learned that being a Java expert is not necessary to take advantage of the power of the Selenium framework.

As a simple first test, I tested the login of one of my WordPress projects. Why WordPress? Just because using the WordPress login form is an example that everybody can follow more easily than if I were to refer to some custom web application.

What do you need to start using Selenium WebDriver? Because I decided to use the most common implementation of Selenium in Java, I needed to set up my little Java environment.

If you want to follow my example, you can use the Java environment of your choice. If you haven’t set one up yet, I suggest installing Eclipse and making sure you are able to run a simple “Hello world” script in Java.

Because I wanted to test the login in Chrome, I made sure that the Chrome browser was already installed on my machine. That’s all I did in preparation.

Downloading The ChromeDriver

All major browsers provide their own implementation of the WebDriver interface. Because I wanted to test the WordPress login in Chrome, I needed to get the WebDriver implementation of Chrome: ChromeDriver.

I extracted the ZIP archive and stored the executable file chromedriver.exe in a location that I could remember for later.

Setting Up Our Selenium Project In Eclipse

The steps I took in Eclipse are probably pretty basic to someone who works a lot with Java and Eclipse. But for those like me, who are not so familiar with this, I will go over the individual steps:

  1. Open Eclipse.
  2. Click the “New” icon.
    Creating a new project in Eclipse
    Creating a new project in Eclipse
  3. Choose the wizard to create a new “Java Project,” and click “Next.”
    Choosing the java-project wizard
    Choose the java-project wizard.
  4. Give your project a name, and click “Finish.”
    Eclipse project wizard
    The eclipse project wizard
  5. Now you should see your new Java project on the left side of the screen.
    Java project successfully created
    We successfully created a project to run the Selenium WebDriver.

Adding The Selenium Library To Our Project

Now we have our Java project, but Selenium is still missing. So, next, we need to bring the Selenium framework into our Java project. Here are the steps I took:

  1. Download the latest version of the Java Selenium library.
    Downloading the Selenium library
    Download the Selenium library.
  2. Extract the archive, and store the folder in a place you can remember easily.
  3. Go back to Eclipse, and go to “Project” → “Properties.”
    Eclipse Properties
    Go to properties to integrate the Selenium WebDriver into your project.
  4. In the dialog, go to “Java Build Path” and then to the “Libraries” tab.
  5. Click on “Add External JARs.”
    Adding the Selenium lib to your Java build path.
    Add the Selenium lib to your Java build path.
  6. Navigate to the just downloaded folder with the Selenium library. Highlight all .jar files and click “Open.”
    Selecting the correct files of the Selenium lib.
    Select all files of the lib to add to your project.
  7. Repeat this for all .jar files in the subfolder libs as well.
  8. Eventually, you should see all .jar files in the libraries of your project:
    Selenium WebDriver framework successfully integrated into your project
    The Selenium WebDriver framework has now been successfully integrated into your project!

That’s it! Everything we’ve done until now is a one-time task. You could use this project now for all of your different tests, and you wouldn’t need to do the whole setup process for every test case again. Kind of neat, isn’t it?

Creating Our Testing Class And Letting It Open The Chrome Browser

Now we have our Selenium project, but what next? To see whether it works at all, I wanted to try something really simple, like just opening my Chrome browser.

To do this, I needed to create a new Java class from which I could execute my first test case. Into this executable class, I copied a few Java code lines, and believe it or not, it worked! Magically, the Chrome browser opened and, after a few seconds, closed all by itself.

Try it yourself:

  1. Click on the “New” button again (while you are in your new project’s folder).
    New class in eclipse
    Create a new class to run the Selenium WebDriver.
  2. Choose the “Class” wizard, and click “Next.”
    New class wizard in eclipse
    Choose the Java class wizard to create a new class.
  3. Name your class (for example, “RunTest”), and click “Finish.”
    Eclipse Java Class wizard
    The eclipse Java Class wizard.
  4. Replace all code in your new class with the following code. The only thing you need to change is the path to chromedriver.exe on your computer:
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.chrome.ChromeDriver;
    
    /**
     * @author Nils Schuette via frontendtest.org
     */
    public class RunTest {
        static WebDriver webDriver;
        /**
         * @param args
         * @throws InterruptedException
         */
        public static void main(final String[] args) throws InterruptedException {
            // Telling the system where to find the chrome driver
            System.setProperty(
                    "webdriver.chrome.driver",
                    "C:/PATH/TO/chromedriver.exe");
    
            // Open the Chrome browser
            webDriver = new ChromeDriver();
    
            // Waiting a bit before closing
            Thread.sleep(7000);
    
            // Closing the browser and WebDriver
            webDriver.close();
            webDriver.quit();
        }
    }
    
  5. Save your file, and click on the play button to run your code.
    Run Eclipse project
    Running your first Selenium WebDriver project.
  6. If you have done everything correctly, the code should open a new instance of the Chrome browser and close it shortly thereafter.
    Chrome Browser blank window
    The Chrome Browser opens itself magically. (Large preview)

Testing The WordPress Admin Login

Now I was optimistic that I could automate my first little feature test. I wanted the browser to navigate to one of my WordPress projects, log in to the admin area, and verify that the login was successful. So, what commands did I need to look up?

  1. Navigate to the login form,
  2. Locate the input fields,
  3. Type the username and password into the input fields,
  4. Hit the login button,
  5. Compare the current page’s headline to see if the login was successful.

Again, after I had done all the necessary updates to my code and clicked on the run button in Eclipse, my browser started to magically work itself through the WordPress login. I successfully ran my first automated website test!

If you want to try this yourself, replace all of the code of your Java class with the following. I will go through the code in detail afterwards. Before executing the code, you must replace four values with your own:

  1. The location of your chromedriver.exe file (as above),

  2. The URL of the WordPress admin account that you want to test,

  3. The WordPress username,

  4. The WordPress password.

Then, save and let it run again. It will open Chrome, navigate to the login page of your WordPress website, log in, and check whether the h1 headline of the current page is “Dashboard.”

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

/**
 * @author Nils Schuette via frontendtest.org
 */
public class RunTest {
    static WebDriver webDriver;
    /**
     * @param args
     * @throws InterruptedException
     */
    public static void main(final String[] args) throws InterruptedException {
        // Telling the system where to find the chrome driver
        System.setProperty(
                "webdriver.chrome.driver",
                "C:/PATH/TO/chromedriver.exe");

        // Open the Chrome browser
        webDriver = new ChromeDriver();

        // Maximize the browser window
        webDriver.manage().window().maximize();

        if (testWordpresslogin()) {
            System.out.println("Test WordPress Login: Passed");
        } else {
            System.out.println("Test WordPress Login: Failed");

        }

        // Close the browser and WebDriver
        webDriver.close();
        webDriver.quit();
    }

    private static boolean testWordpresslogin() {
        try {
            // Navigate to the WordPress admin login page
            webDriver.navigate().to("https://www.YOUR-SITE.org/wp-admin/");

            // Type in the username
            webDriver.findElement(By.id("user_login")).sendKeys("YOUR_USERNAME");

            // Type in the password
            webDriver.findElement(By.id("user_pass")).sendKeys("YOUR_PASSWORD");

            // Click the Submit button
            webDriver.findElement(By.id("wp-submit")).click();

            // Wait a little bit (7000 milliseconds)
            Thread.sleep(7000);

            // Check whether the h1 equals “Dashboard”
            if (webDriver.findElement(By.tagName("h1")).getText()
                    .equals("Dashboard")) {
                return true;
            } else {
                return false;
            }

        // If anything goes wrong, return false.
        } catch (final Exception e) {
            System.out.println(e.getClass().toString());
            return false;
        }
    }
}

If you have done everything correctly, your output in the Eclipse console should look something like this:

Eclipse console test result.
The Eclipse console states that our first test has passed. (Large preview)

Understanding The Code

Because you are probably a web developer and have at least a basic understanding of other programming languages, I am sure you already grasp the basic idea of the code: We have created a separate method, testWordpresslogin, for the specific test case, which is called from our main method.

Depending on whether the method returns true or false, you will get an output in your console telling you whether this specific test passed or failed.

This is not necessary, but this way you can easily add many more test cases to this class and still have readable code.

Now, step by step, here is what happens in our little program:

  1. First, we tell our program where it can find the specific WebDriver for Chrome.
    System.setProperty("webdriver.chrome.driver","C:/PATH/TO/chromedriver.exe");
  2. We open the Chrome browser and maximize the browser window.
    webDriver = new ChromeDriver();
    webDriver.manage().window().maximize();
  3. This is where we jump into our submethod and check whether it returns true or false.
    if (testWordpresslogin()) …
  4. The following part in our submethod might not be intuitive to understand:
    The try{…}catch{…} blocks. If everything goes as expected, only the code in try{…} will be executed, but if anything goes wrong while executing try{…}, then the execution continues in catch{}. Whenever you try to locate an element with findElement and the browser is not able to locate this element, it will throw an exception and execute the code in catch{…}. In my example, the test will be marked as “failed” whenever something goes wrong and the catch{} is executed.
  5. In the submethod, we start by navigating to our WordPress admin area and locating the fields for the username and the password by looking for their IDs. Also, we type the given values in these fields.
    webDriver.navigate().to("https://www.YOUR-SITE.org/wp-admin/");
    webDriver.findElement(By.id("user_login")).sendKeys("YOUR_USERNAME");
    webDriver.findElement(By.id("user_pass")).sendKeys("YOUR_PASSWORD");

    Wordpress login form
    Selenium fills out our login form
  6. After filling in the login form, we locate the submit button by its ID and click it.
    webDriver.findElement(By.id("wp-submit")).click();
  7. In order to follow the test visually, I include a 7-second pause here (7000 milliseconds = 7 seconds).
    Thread.sleep(7000);
  8. If the login is successful, the h1 headline of the current page should now be “Dashboard,” referring to the WordPress admin area. Because the h1 headline should exist only once on every page, I have used the tag name here to locate the element. In most other cases, the tag name is not a good locator because an HTML tag name is rarely unique on a web page. After locating the h1, we extract the text of the element with getText() and check whether it is equal to the string “Dashboard.” If the login is not successful, we would not find “Dashboard” as the current h1. Therefore, I’ve decided to use the h1 to check whether the login is successful.
    if (webDriver.findElement(By.tagName("h1")).getText().equals("Dashboard")) 
        {
            return true;
        } else {
            return false;
        }
    

    Wordpress Dashboard
    Letting the WebDriver check, whether we have arrived on the Dashboard: Test passed! (Large preview)
  9. If anything has gone wrong in the previous part of the submethod, the program would have jumped directly to the following part. The catch block will print the type of exception that happened to the console and afterwards return false to the main method.
    catch (final Exception e) {
                System.out.println(e.getClass().toString());
                return false;
            }

Adapting The Test Case

This is where it gets interesting if you want to adapt and add test cases of your own. You can see that we always call methods of the webDriver object to do something with the Chrome browser.

First, we maximize the window:

webDriver.manage().window().maximize();

Then, in a separate method, we navigate to our WordPress admin area:

webDriver.navigate().to("https://www.YOUR-SITE.org/wp-admin/");

There are other methods of the webDriver object we can use. Besides the two above, you will probably use this one a lot:

webDriver.findElement(By. …)

The findElement method helps us find different elements in the DOM. There are different options to find elements:

  • By.id
  • By.cssSelector
  • By.className
  • By.linkText
  • By.name
  • By.xpath

If possible, I recommend using By.id because the ID of an element should always be unique (unlike, for example, the className), and it is usually not affected if the structure of your DOM changes (unlike, say, an XPath).

Note: You can read more about the different options for locating elements with WebDriver over here.

As soon as you get ahold of an element using the findElement method, you can call the different available methods of the element. The most common ones are sendKeys, click and getText.

We’re using sendKeys to fill in the login form:

webDriver.findElement(By.id("user_login")).sendKeys("YOUR_USERNAME");

We have used click to submit the login form by clicking on the submit button:

webDriver.findElement(By.id("wp-submit")).click();

And getText has been used to check what text is in the h1 after the submit button is clicked:

webDriver.findElement(By.tagName("h1")).getText()

Note: Be sure to check out all the available methods that you can use with an element.

Conclusion

Ever since I discovered the power of Selenium WebDriver, my life as a web developer has changed. I simply love it. The deeper I dive into the framework, the more possibilities I discover — running one test simultaneously in Chrome, Internet Explorer and Firefox or even on my smartphone, or taking screenshots automatically of different pages and comparing them. Today, I use Selenium WebDriver not only for testing purposes, but also to automate repetitive tasks on the web. Whenever I see an opportunity to automate my work on the web, I simply copy my initial WebDriver project and adapt it to the next task.

If you think that Selenium WebDriver is for you, I recommend looking at Selenium’s documentation to find out about all of the possibilities of Selenium (such as running tasks simultaneously on several (mobile) devices with Selenium Grid).

I look forward to hearing whether you find WebDriver as useful as I do!

Smashing Editorial
(rb, ra, al, il)

Source: Smashing Magazine, Automating Your Feature Testing With Selenium WebDriver

Will SiriKit’s Intents Fit Your App? If So, Here’s How To Use Them

dreamt up by webguru in Uncategorized | Comments Off on Will SiriKit’s Intents Fit Your App? If So, Here’s How To Use Them

Will SiriKit’s Intents Fit Your App? If So, Here’s How To Use Them

Will SiriKit’s Intents Fit Your App? If So, Here’s How To Use Them

Lou Franco



Since iOS 5, Siri has helped iPhone users send messages, set reminders and look up restaurants with Apple’s apps. Starting in iOS 10, we have been able to use Siri in some of our own apps as well.

In order to use this functionality, your app must fit within Apple’s predefined Siri “domains and intents.” In this article, we’ll learn about what those are and see whether our apps can use them. We’ll take a simple app that is a to-do list manager and learn how to add Siri support. We’ll also go through the Apple developer website’s guidelines on configuration and Swift code for a new type of extension that was introduced with SiriKit: the Intents extension.

When you get to the coding part of this article, you will need Xcode (at least version 9.x), and it would be good if you are familiar with iOS development in Swift because we’re going to add Siri to a small working app. We’ll go through the steps of setting up an extension on Apple’s developer website and of adding the Siri extension code to the app.

“Hey Siri, Why Do I Need You?”

Sometimes I use my phone while on my couch, with both hands free, and I can give the screen my full attention. Maybe I’ll text my sister to plan our mom’s birthday or reply to a question in Trello. I can see the app. I can tap the screen. I can type.

But I might be walking around my town, listening to a podcast, when a text comes in on my watch. My phone is in my pocket, and I can’t easily answer while walking.

With Siri, I can hold down my headphone’s control button and say, “Text my sister that I’ll be there by two o’clock.” Siri is great when you are on the go and can’t give full attention to your phone, or when the interaction is minor but would otherwise require several taps and a bunch of typing.

This is fine if I want to use Apple apps for these interactions. But some categories of apps, like messaging, have very popular alternatives. Other activities, such as booking a ride or reserving a table in a restaurant, are not even possible with Apple’s built-in apps but are perfect for Siri.

Apple’s Approach To Voice Assistants

To enable Siri in third-party apps, Apple had to decide on a mechanism to take the sound from the user’s voice and somehow get it to the app in a way that it could fulfill the request. To make this possible, Apple requires the user to mention the app’s name in the request, but they had several options of what to do with the rest of the request.

  • It could have sent a sound file to the app.
    The benefit of this approach is that the app could try to handle literally any request the user might have for it. Amazon or Google might have liked this approach because they already have sophisticated voice-recognition services. But most apps would not be able to handle this very easily.
  • It could have turned the speech into text and sent that.
    Because many apps don’t have sophisticated natural-language implementations, the user would usually have to stick to very particular phrases, and non-English support would be up to the app developer to implement.
  • It could have asked you to provide a list of phrases that you understand.
    This mechanism is closer to what Amazon does with Alexa (in its “skills” framework), and it enables far more uses of Alexa than SiriKit can currently handle. In an Alexa skill, you provide phrases with placeholder variables that Alexa will fill in for you. For example, “Alexa, remind me at $TIME$ to $REMINDER$” — Alexa will run this phrase against what the user has said and tell you the values for TIME and REMINDER. As with the previous mechanism, the developer needs to do all of the translation, and there isn’t a lot of flexibility if the user says something slightly different.
  • It could define a list of requests with parameters and send the app a structured request.
    This is actually what Apple does, and the benefit is that it can support a variety of languages, and it does all of the work to try to understand all of the ways a user might phrase a request. The big downside is that you can only implement handlers for requests that Apple defines. This is great if you have, for example, a messaging app, but if you have a music-streaming service or a podcast player, you have no way to use SiriKit right now.

Similarly, there are three ways for apps to talk back to the user: with sound, with text that gets converted, or by expressing the kind of thing you want to say and letting the system figure out the exact way to express it. The last solution (which is what Apple does) puts the burden of translation on Apple, but it gives you limited ways to use your own words to describe things.

The kinds of requests you can handle are defined in SiriKit’s domains and intents. An intent is a type of request that a user might make, like texting a contact or finding a photo. Each intent has a list of parameters — for example, texting requires a contact and a message.

A domain is just a group of related intents. Reading a text and sending a text are both in the messaging domain. Booking a ride and getting a location are in the ride-booking domain. There are domains for making VoIP calls, starting workouts, searching for photos and a few more things. SiriKit’s documentation contains a full list of domains and their intents.
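To make this concrete, the sketch below shows roughly what a structured request looks like as data. It uses the messaging domain mentioned above, with made-up values (the recipient details are hypothetical); the point is that every parameter is a typed property on an intent object, not raw audio or free text.

import Intents

// Made-up values for illustration: an intent is a typed, structured request.
let sister = INPerson(personHandle: INPersonHandle(value: "sister@example.com", type: .emailAddress),
                      nameComponents: nil,
                      displayName: "My Sister",
                      image: nil,
                      contactIdentifier: nil,
                      customIdentifier: nil)

let sendMessage = INSendMessageIntent(recipients: [sister],
                                      content: "I'll be there by two o'clock",
                                      speakableGroupName: nil,
                                      conversationIdentifier: nil,
                                      serviceName: nil,
                                      sender: nil)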

A common criticism of Siri is that it seems unable to handle requests as well as Google and Alexa, and that the third-party voice ecosystem enabled by Apple’s competitors is richer.

I agree with those criticisms. If your app doesn’t fit within the current intents, then you can’t use SiriKit, and there’s nothing you can do. Even if your app does fit, you can’t control all of the words Siri says or understands; so, if you have a particular way of talking about things in your app, you can’t always teach that to Siri.

The hope of iOS developers is both that Apple will greatly expand its list of intents and that its natural language processing becomes much better. If it does that, then we will have a voice assistant that works without developers having to do translation or understand all of the ways of saying the same thing. And implementing support for structured requests is actually fairly simple to do — a lot easier than building a natural language parser.

Another big benefit of the intents framework is that it is not limited to Siri and voice requests. Even now, the Maps app can generate an intents-based request to your app (for example, a restaurant reservation). It does this programmatically (not from voice or natural language). If Apple allowed apps to discover each other’s exposed intents, we’d have a much better way for apps to work together (as opposed to x-callback-style URLs).

Finally, because an intent is a structured request with parameters, there is a simple way for an app to express that parameters are missing or that it needs help distinguishing between some options. Siri can then ask follow-up questions to resolve the parameters without the app needing to conduct the conversation.

The Ride-Booking Domain

To understand domains and intents, let’s look at the ride-booking domain. This is the domain that you would use to ask Siri to get you a Lyft car.

Apple defines how to ask for a ride and how to get information about it, but there is no built-in Apple app that can actually handle this request. This is one of the few domains where a SiriKit-enabled app is required.

You can invoke one of the intents via voice or directly from Maps. Some of the intents for this domain are:

  • Request a ride
    Use this one to book a ride. You’ll need to provide a pick-up and drop-off location, and the app might also need to know your party’s size and what kind of ride you want. A sample phrase might be, “Book me a ride with <appname>.”
  • Get the ride’s status
    Use this intent to find out whether your request was received and to get information about the vehicle and driver, including their location. The Maps app uses this intent to show an updated image of the car as it is approaching you.
  • Cancel a ride
    Use this to cancel a ride that you have booked.

For any of these intents, Siri might need to know more information. As you’ll see when we implement an intent handler, your Intents extension can tell Siri that a required parameter is missing, and Siri will prompt the user for it.

The fact that intents can be invoked programmatically by Maps shows how intents might enable inter-app communication in the future.

Note: You can get a full list of domains and their intents on Apple’s developer website. There is also a sample Apple app with many domains and intents implemented, including ride-booking.

Adding Lists And Notes Domain Support To Your App

OK, now that we understand the basics of SiriKit, let’s look at how you would go about adding Siri support to an app. It involves a fair amount of configuration and a class for each intent you want to handle.

The rest of this article consists of the detailed steps to add Siri support to an app. There are five high-level things you need to do:

  1. Prepare to add a new extension to the app by creating provisioning profiles with new entitlements for it on Apple’s developer website.
  2. Configure your app (via its plist) to use the entitlements.
  3. Use Xcode’s template to get started with some sample code.
  4. Add the code to support your Siri intent.
  5. Configure Siri’s vocabulary via plists.

Don’t worry: We’ll go through each of these, explaining extensions and entitlements along the way.

To focus on just the Siri parts, I’ve prepared a simple to-do list manager, List-o-Mat.

An animated GIF showing a demo of List-o-Mat
Making lists in List-o-Mat (Large preview)

You can find the full source of the sample, List-o-Mat, on GitHub.

To create it, all I did was start with the Xcode Master-Detail app template and make both screens into a UITableView. I added a way to add and delete lists and items, and a way to check off items as done. All of the navigation is generated by the template.

To store the data, I used the Codable protocol (introduced at WWDC 2017) to turn the structs into JSON, which is saved to a text file in the documents folder.
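As a quick illustration (with hypothetical type names rather than the exact structs in List-o-Mat), marking the model types as Codable is all it takes to round-trip them through JSON:

import Foundation

// Hypothetical stand-ins for List-o-Mat's model structs.
struct TodoItem: Codable {
    var name: String
    var done: Bool
}

struct TodoList: Codable {
    var name: String
    var items: [TodoItem]
}

do {
    // Codable gives us JSON encoding and decoding for free.
    let lists = [TodoList(name: "Grocery Store", items: [TodoItem(name: "Apples", done: false)])]
    let data = try JSONEncoder().encode(lists)
    let restored = try JSONDecoder().decode([TodoList].self, from: data)
    print(restored.first?.name ?? "")
} catch {
    print("Could not encode or decode the lists: \(error)")
}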

I’ve deliberately kept the code very simple. If you have any experience with Swift and making view controllers, then you should have no problem with it.

Now we can go through the steps of adding SiriKit support. The high-level steps are the same for any app, whatever domain and intents you plan to implement. We’ll mostly be dealing with Apple’s developer website, editing plists and writing a bit of Swift.

For List-o-Mat, we’ll focus on the lists and notes domain, which is broadly applicable to things like note-taking apps and to-do lists.

In the lists and notes domain, we have the following intents that would make sense for our app.

  • Get a list of tasks.
  • Add a new task to a list.

Because the interactions with Siri actually happen outside of your app (maybe even when your app is not running), iOS uses an extension to implement this.

The Intents Extension

If you have not worked with extensions, you’ll need to know three main things:

  1. An extension is a separate process. It is delivered inside of your app’s bundle, but it runs completely on its own, with its own sandbox.
  2. Your app and extension can communicate with each other by being in the same app group. The easiest way is via the group’s shared sandbox folders (so, they can read and write to the same files if you put them there).
  3. Extensions require their own app IDs, profiles and entitlements.

To add an extension to your app, start by logging into your developer account and going to the “Certificates, Identifiers, & Profiles” section.

Updating Your Apple Developer App Account Data

In our Apple developer account, the first thing we need to do is create an app group. Go to the “App Groups” section under “Identifiers” and add one.

A screenshot of the Apple developer website dialog for registering an app group
Registering an app group (Large preview)

It must start with group, followed by your usual reverse domain-based identifier. Because it has a prefix, you can use your app’s identifier for the rest.

Then, we need to update our app’s ID to use this group and to enable Siri:

  1. Go to the “App IDs” section and click on your app’s ID;
  2. Click the “Edit” button;
  3. Enable app groups (if not enabled for another extension).
    A screenshot of Apple developer website enabling app groups for an app ID
    Enable app groups (Large preview)
  4. Then configure the app group by clicking the “Edit” button. Choose the app group from before.
    A screenshot of the Apple developer website dialog to set the app group name
    Set the name of the app group (Large preview)
  5. Enable SiriKit.
    A screenshot of SiriKit being enabled
    Enable SiriKit (Large preview)
  6. Click “Done” to save it.

Now, we need to create a new app ID for our extension:

  1. In the same “App IDs” section, add a new app ID. This will be your app’s identifier, with a suffix. Do not use just Intents as a suffix because this name will become your module’s name in Swift and would then conflict with the real Intents.
    A screenshot of the Apple developer screen to create an app ID
    Create an app ID for the Intents extension (Large preview)
  2. Enable this app ID for app groups as well (and set up the group as we did before).

Now, create a development provisioning profile for the Intents extension, and regenerate your app’s provisioning profile. Download and install them as you would normally do.

Now that our profiles are installed, we need to go to Xcode and update the app’s entitlements.

Updating Your App’s Entitlements In Xcode

Back in Xcode, choose your project’s name in the project navigator. Then, choose your app’s main target, and go to the “Capabilities” tab. In there, you will see a switch to turn on Siri support.

A screenshot of Xcode’s entitlements screen showing SiriKit is enabled
Enable SiriKit in your app’s entitlements. (Large preview)

Further down the list, you can turn on app groups and configure it.

A screenshot of Xcode's entitlements screen showing the app group is enabled and configured
Configure the app’s app group (Large preview)

If you have set it up correctly, you’ll see this in your app’s .entitlements file:

A screenshot of the App's plist showing that the entitlements are set
The plist shows the entitlements that you set (Large preview)

Now, we are finally ready to add the Intents extension target to our project.

Adding The Intents Extension

We’re finally ready to add the extension. In Xcode, choose “File” → “New Target.” This sheet will pop up:

A screenshot showing the Intents extension in the New Target dialog in Xcode
Add the Intents extension to your project (Large preview)

Choose “Intents Extension” and click the “Next” button. Fill out the following screen:

A screenshot from Xcode showing how you configure the Intents extension
Configure the Intents extension (Large preview)

The product name needs to match the suffix you used for the Intents extension’s app ID on the Apple developer website.

We are choosing not to add an intents UI extension. This isn’t covered in this article, but you could add it later if you need one. Basically, it’s a way to put your own branding and display style into Siri’s visual results.

When you are done, Xcode will create an intents handler class that we can use as a starting point for our Siri implementation.

The Intents Handler: Resolve, Confirm And Handle

Xcode generated a new target that has a starting point for us.

The first thing you have to do is set up this new target to be in the same app group as the app. As before, go to the “Capabilities” tab of the target, and turn on app groups, and configure it with your group name. Remember, apps in the same group have a sandbox that they can use to share files with each other. We need this in order for Siri requests to get to our app.

List-o-Mat has a function that returns the group document folder. We should use it whenever we want to read or write to a shared file.

// Returns the shared app-group container so that both the app and the
// Intents extension read and write the same files.
func documentsFolder() -> URL? {
    return FileManager.default.containerURL(forSecurityApplicationGroupIdentifier: "group.com.app-o-mat.ListOMat")
}

For example, when we save the lists, we use this:

func save(lists: Lists) {
    guard let docsDir = documentsFolder() else {
        fatalError("no docs dir")
    }

    let url = docsDir.appendingPathComponent(fileName, isDirectory: false)

    // Encode lists as JSON and save to url
}

The Intents extension template created a file named IntentHandler.swift, with a class named IntentHandler. It also configured it to be the intents’ entry point in the extension’s plist.

A screenshot from Xcode showing how the IntentHandler is configured as an entry point
The intent extension plist configures IntentHandler as the entry point

In this same plist, you will see a section to declare the intents we support. We’re going to start with the one that allows searching for lists, which is named INSearchForNotebookItemsIntent. Add it to the array under IntentsSupported.

A screenshot in Xcode showing that the extension plist should list the intents it handles
Add the intent’s name to the intents plist (Large preview)

Now, go to IntentHandler.swift and replace its contents with this code:

import Intents

class IntentHandler: INExtension {
    override func handler(for intent: INIntent) -> Any? {
        switch intent {
        case is INSearchForNotebookItemsIntent:
            return SearchItemsIntentHandler()
        default:
            return nil
        }
    }
}

The handler function is called to get an object to handle a specific intent. You can just implement all of the protocols in this class and return self, but we’ll put each intent in its own class to keep it better organized.

Because we intend to have a few different classes, let’s give them a common base class for code that we need to share between them:

class ListOMatIntentsHandler: NSObject {
}

The intents framework requires us to inherit from NSObject. We’ll fill in some methods later.

We start our search implementation with this:

class SearchItemsIntentHandler: ListOMatIntentsHandler, INSearchForNotebookItemsIntentHandling {
}

To implement an intent handler, we need to complete three basic steps:

  1. Resolve the parameters.
    Make sure required parameters are given, and disambiguate any you don’t fully understand.
  2. Confirm that the request is doable.
    This is often optional, but even if you know that each parameter is good, you might still need access to an outside resource or have other requirements.
  3. Handle the request.
    Do the thing that is being requested.

INSearchForNotebookItemsIntent, the first intent we’ll implement, can be used as a task search. The kinds of requests we can handle with this are, “In List-o-Mat, show the grocery store list” or “In List-o-Mat, show the store list.”

Aside: “List-o-Mat” is actually a bad name for a SiriKit app because Siri has a hard time with hyphens in app names. Luckily, SiriKit allows us to have alternate names and to provide pronunciation. In the app’s Info.plist, add this section:

A screenshot from Xcode showing that the app plist can add alternate app names and pronunciations
Add alternate app names and pronunciation guides to the app plist

This allows the user to say “list oh mat” and for that to be understood as a single word (without hyphens). It doesn’t look ideal on the screen, but without it, Siri sometimes thinks “List” and “Mat” are separate words and gets very confused.

Resolve: Figuring Out The Parameters

For a search for notebook items, there are several parameters:

  1. the item type (a task, a task list, or a note),
  2. the title of the item,
  3. the content of the item,
  4. the completion status (whether the task is marked done or not),
  5. the location it is associated with,
  6. the date it is associated with.

We require only the first two, so we’ll need to write resolve functions for them. INSearchForNotebookItemsIntent has methods for us to implement.

Because we only care about showing task lists, we’ll hardcode that into the resolve for item type. In SearchItemsIntentHandler, add this:

func resolveItemType(for intent: INSearchForNotebookItemsIntent,
                         with completion: @escaping (INNotebookItemTypeResolutionResult) -> Void) {

    completion(.success(with: .taskList))
}

So, no matter what the user says, we’ll be searching for task lists. If we wanted to expand our search support, we’d let Siri try to figure this out from the original phrase and then just use completion(.needsValue()) if the item type was missing. Alternatively, we could try to guess from the title by seeing what matches it. In this case, we would complete with success when Siri knows what it is, and we would use completion(.notRequired()) when we are going to try multiple possibilities.
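As a rough sketch of that more flexible approach (not what List-o-Mat actually does), the resolve function could pass through whatever item type Siri extracted and only ask when it is missing:

func resolveItemType(for intent: INSearchForNotebookItemsIntent,
                     with completion: @escaping (INNotebookItemTypeResolutionResult) -> Void) {
    if intent.itemType == .unknown {
        // Siri couldn't tell which kind of item the user wants, so prompt for it.
        completion(.needsValue())
    } else {
        completion(.success(with: intent.itemType))
    }
}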

Title resolution is a little trickier. What we want is for Siri to use a list if it finds one with an exact match for what you said. If it’s unsure or if there is more than one possibility, then we want Siri to ask us for help in figuring it out. To do this, SiriKit provides a set of resolution enums that let us express what we want to happen next.

So, if you say “Grocery Store,” then Siri would have an exact match. But if you say “Store,” then Siri would present a menu of matching lists.

We’ll start with this function to give the basic structure:

func resolveTitle(for intent: INSearchForNotebookItemsIntent, with completion: @escaping (INSpeakableStringResolutionResult) -> Void) {
    guard let title = intent.title else {
        completion(.needsValue())
        return
    }

    let possibleLists = getPossibleLists(for: title)
    completeResolveListName(with: possibleLists, for: title, with: completion)
}

We’ll implement getPossibleLists(for:) and completeResolveListName(with:for:with:) in the ListOMatIntentsHandler base class.

getPossibleLists(for:) needs to try to fuzzy match the title that Siri passes us with the actual list names.

public func getPossibleLists(for listName: INSpeakableString) -> [INSpeakableString] {
    var possibleLists = [INSpeakableString]()
    for l in loadLists() {
        if l.name.lowercased() == listName.spokenPhrase.lowercased() {
            return [INSpeakableString(spokenPhrase: l.name)]
        }
        if l.name.lowercased().contains(listName.spokenPhrase.lowercased()) || listName.spokenPhrase.lowercased() == "all" {
            possibleLists.append(INSpeakableString(spokenPhrase: l.name))
        }
    }
    return possibleLists
}

We loop through all of our lists. If we get an exact match, we’ll return it, and if not, we’ll return an array of possibilities. In this function, we’re simply checking to see whether the word the user said is contained in a list name (so, a pretty simple match). This lets “Grocery” match “Grocery Store.” A more advanced algorithm might try to match based on words that sound the same (for example, with the Soundex algorithm).

completeResolveListName(with:for:with:) is responsible for deciding what to do with this list of possibilities.

public func completeResolveListName(with possibleLists: [INSpeakableString], for listName: INSpeakableString, with completion: @escaping (INSpeakableStringResolutionResult) -> Void) {
    switch possibleLists.count {
    case 0:
        completion(.unsupported())
    case 1:
        if possibleLists[0].spokenPhrase.lowercased() == listName.spokenPhrase.lowercased() {
            completion(.success(with: possibleLists[0]))
        } else {
            completion(.confirmationRequired(with: possibleLists[0]))
        }
    default:
        completion(.disambiguation(with: possibleLists))
    }
}

If we got an exact match, we tell Siri that we succeeded. If we got one inexact match, we tell Siri to ask the user if we guessed it right.

If we got multiple matches, then we use completion(.disambiguation(with: possibleLists)) to tell Siri to show a list and let the user pick one.

Now that we know what the request is, we need to look at the whole thing and make sure we can handle it.

Confirm: Check All Of Your Dependencies

In this case, if we have resolved all of the parameters, we can always handle the request. Typical confirm() implementations might check the availability of external services or check authorization levels.

Because confirm() is optional, we could just do nothing, and Siri would assume we could handle any request with resolved parameters. To be explicit, we could use this:

func confirm(intent: INSearchForNotebookItemsIntent, completion: @escaping (INSearchForNotebookItemsIntentResponse) -> Void) {
    completion(INSearchForNotebookItemsIntentResponse(code: .success, userActivity: nil))
}

This means we can handle anything.
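If your app did need confirm() to verify something, a minimal sketch (illustrative, not part of the sample app) might check that the shared app-group container is reachable before reporting success:

func confirm(intent: INSearchForNotebookItemsIntent, completion: @escaping (INSearchForNotebookItemsIntentResponse) -> Void) {
    // Fail early if the shared container (our only dependency) cannot be found.
    guard documentsFolder() != nil else {
        completion(INSearchForNotebookItemsIntentResponse(code: .failure, userActivity: nil))
        return
    }
    completion(INSearchForNotebookItemsIntentResponse(code: .success, userActivity: nil))
}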

Handle: Do It

The final step is to handle the request.

func handle(intent: INSearchForNotebookItemsIntent, completion: @escaping (INSearchForNotebookItemsIntentResponse) -> Void) {
    guard
        let title = intent.title,
        let list = loadLists().filter({ $0.name.lowercased() == title.spokenPhrase.lowercased()}).first
    else {
        completion(INSearchForNotebookItemsIntentResponse(code: .failure, userActivity: nil))
        return
    }

    let response = INSearchForNotebookItemsIntentResponse(code: .success, userActivity: nil)
    response.tasks = list.items.map {
        return INTask(title: INSpeakableString(spokenPhrase: $0.name),
                      status: $0.done ? INTaskStatus.completed : INTaskStatus.notCompleted,
                      taskType: INTaskType.notCompletable,
                      spatialEventTrigger: nil,
                      temporalEventTrigger: nil,
                      createdDateComponents: nil,
                      modifiedDateComponents: nil,
                      identifier: "\(list.name)\t\($0.name)")
    }
    completion(response)
}

First, we find the list based on the title. At this point, resolveTitle has already made sure that we’ll get an exact match. But if there’s an issue, we can still return a failure.

When we have a failure, we have the option of passing a user activity. If your app uses Handoff and has a way to handle this exact type of request, then Siri might try deferring to your app to try the request there. It will not do this when we are in a voice-only context (for example, you started with “Hey Siri”), and it doesn’t guarantee that it will do it in other cases, so don’t count on it.
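If you did want to offer that hand-off, a sketch of the failure path could look like the helper below. The activity type string is hypothetical, and the app itself would also have to handle the activity (for example, in application(_:continue:restorationHandler:)):

func failWithHandoff(for title: INSpeakableString,
                     completion: @escaping (INSearchForNotebookItemsIntentResponse) -> Void) {
    // Hypothetical activity type; register and handle it in the app for this to be useful.
    let activity = NSUserActivity(activityType: "com.app-o-mat.ListOMat.searchItems")
    activity.userInfo = ["title": title.spokenPhrase]
    completion(INSearchForNotebookItemsIntentResponse(code: .failureRequiringAppLaunch, userActivity: activity))
}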

This is now ready to test. Choose the intent extension in the target list in Xcode. But before you run it, edit the scheme.

A screenshot from Xcode showing how to edit a scheme
Edit the scheme of the intent to add a sample phrase for debugging.

That brings up a way to provide a query directly:

A screenshot from Xcode showing the edit scheme dialog
Add the sample phrase to the Run section of the scheme. (Large preview)

Notice that I am using “ListOMat” because of the hyphens issue mentioned above. Luckily, it’s pronounced the same as my app’s name, so it should not be much of an issue.

Back in the app, I made a “Grocery Store” list and a “Hardware Store” list. If I ask Siri for the “store” list, it will go through the disambiguation path, which looks like this:

An animated GIF showing Siri handling a request to show the Store list
Siri handles the request by asking for clarification. (Large preview)

If you say “Grocery Store,” then you’ll get an exact match, which goes right to the results.

Adding Items Via Siri

Now that we know the basic concepts of resolve, confirm and handle, we can quickly add an intent to add an item to a list.

First, add INAddTasksIntent to the extension’s plist:

A screenshot in XCode showing the new intent being added to the plist
Add the INAddTasksIntent to the extension plist (Large preview)

Then, update our IntentHandler’s handle function.

override func handler(for intent: INIntent) -> Any? {
    switch intent {
    case is INSearchForNotebookItemsIntent:
        return SearchItemsIntentHandler()
    case is INAddTasksIntent:
        return AddItemsIntentHandler()
    default:
        return nil
    }
}

Add a stub for the new class:

class AddItemsIntentHandler: ListOMatIntentsHandler, INAddTasksIntentHandling {
}

Adding an item needs a resolve step similar to searching, except with a target task list instead of a title.

func resolveTargetTaskList(for intent: INAddTasksIntent, with completion: @escaping (INTaskListResolutionResult) -> Void) {

    guard let title = intent.targetTaskList?.title else {
        completion(.needsValue())
        return
    }

    let possibleLists = getPossibleLists(for: title)
    completeResolveTaskList(with: possibleLists, for: title, with: completion)
}

completeResolveTaskList is just like completeResolveListName, but with slightly different types (a task list instead of the title of a task list).

public func completeResolveTaskList(with possibleLists: [INSpeakableString], for listName: INSpeakableString, with completion: @escaping (INTaskListResolutionResult) -> Void) {

    let taskLists = possibleLists.map {
        return INTaskList(title: $0, tasks: [], groupName: nil, createdDateComponents: nil, modifiedDateComponents: nil, identifier: nil)
    }

    switch possibleLists.count {
    case 0:
        completion(.unsupported())
    case 1:
        if possibleLists[0].spokenPhrase.lowercased() == listName.spokenPhrase.lowercased() {
            completion(.success(with: taskLists[0]))
        } else {
            completion(.confirmationRequired(with: taskLists[0]))
        }
    default:
        completion(.disambiguation(with: taskLists))
    }
}

It has the same disambiguation logic and behaves in exactly the same way. Saying “Store” needs to be disambiguated, and saying “Grocery Store” would be an exact match.

We’ll leave confirm unimplemented and accept the default. For handle, we need to add an item to the list and save it.

func handle(intent: INAddTasksIntent, completion: @escaping (INAddTasksIntentResponse) -> Void) {
    var lists = loadLists()
    guard
        let taskList = intent.targetTaskList,
        let listIndex = lists.index(where: { $0.name.lowercased() == taskList.title.spokenPhrase.lowercased() }),
        let itemNames = intent.taskTitles, itemNames.count > 0
    else {
            completion(INAddTasksIntentResponse(code: .failure, userActivity: nil))
            return
    }

    // Get the list
    var list = lists[listIndex]

    // Add the items
    var addedTasks = [INTask]()
    for item in itemNames {
        list.addItem(name: item.spokenPhrase, at: list.items.count)
        addedTasks.append(INTask(title: item, status: .notCompleted, taskType: .notCompletable, spatialEventTrigger: nil, temporalEventTrigger: nil, createdDateComponents: nil, modifiedDateComponents: nil, identifier: nil))
    }

    // Save the new list
    lists[listIndex] = list
    save(lists: lists)

    // Respond with the added items
    let response = INAddTasksIntentResponse(code: .success, userActivity: nil)
    response.addedTasks = addedTasks
    completion(response)
}

We get a list of items and a target list. We look up the list and add the items. We also need to prepare a response for Siri to show with the added items and send it to the completion function.

This function can handle a phrase like, “In ListOMat, add apples to the grocery list.” It can also handle a list of items like, “rice, onions and olives.”

A screenshot of the simulator showing Siri adding items to the grocery store list
Siri adds a few items to the grocery store list

Almost Done, Just A Few More Settings

All of this will work in your simulator or local device, but if you want to submit this, you’ll need to add a NSSiriUsageDescription key to your app’s plist, with a string that describes what you are using Siri for. Something like “Your requests about lists will be sent to Siri” is fine.

You should also add a call to:

INPreferences.requestSiriAuthorization { (status) in }

Put this in your main view controller’s viewDidLoad to ask the user for Siri access. This will show the message you configured above and also let the user know that they could be using Siri for this app.
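As a minimal sketch (the view controller name is hypothetical and stands in for your app’s main view controller), the call looks like this:

import Intents
import UIKit

class ListsViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()
        // Shows the Siri permission prompt (the first time) and reports the user's choice.
        INPreferences.requestSiriAuthorization { status in
            if status != .authorized {
                print("Siri access was not granted")
            }
        }
    }
}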

A screenshot of the dialog that a device pops up when you ask for Siri permission
The device will ask for permission if you try to use Siri in the app.

Finally, you’ll need to tell Siri what to tell the user if the user asks what your app can do, by providing some sample phrases:

  1. Create a plist file in your app (not the extension), named AppIntentVocabulary.plist.
  2. Fill out the intents and phrases that you support.
A screenshot of the AppIntentVocabulary.plist showing sample phrases
Add an AppIntentVocabulary.plist to list the sample phrases that will invoke the intent you handle. (Large preview)

There is no way to really know all of the phrases that Siri will use for an intent, but Apple does provide a few samples for each intent in its documentation. The sample phrases for task-list searching show us that Siri can understand “Show me all my notes on <appName>,” but I found other phrases by trial and error (for example, Siri understands what “lists” are too, not just notes).

Summary

As you can see, adding Siri support to an app has a lot of steps, with a lot of configuration. But the code needed to handle the requests was fairly simple.

There are a lot of steps, but each one is small, and you might be familiar with a few of them if you have used extensions before.

Here is what you’ll need to prepare for a new extension on Apple’s developer website:

  1. Make an app ID for an Intents extension.
  2. Make an app group if you don’t already have one.
  3. Use the app group in the app ID for the app and extension.
  4. Add Siri support to the app’s ID.
  5. Regenerate the profiles and download them.

And here are the steps in Xcode for creating Siri’s Intents extension:

  1. Add an Intents extension using the Xcode template.
  2. Update the entitlements of the app and extension to match the profiles (groups and Siri support).
  3. Add your intents to the extension’s plist.

And you’ll need to add code to do the following things:

  1. Use the app group sandbox to communicate between the app and extension.
  2. Add classes to support each intent with resolve, confirm and handle functions.
  3. Update the generated IntentHandler to use those classes.
  4. Ask for Siri access somewhere in your app.

Finally, there are some Siri-specific configuration settings:

  1. Add the Siri support security string to your app’s plist.
  2. Add sample phrases to an AppIntentVocabulary.plist file in your app.
  3. Run the intent target to test; edit the scheme to provide the phrase.

OK, that is a lot, but if your app fits one of Siri’s domains, then users will expect to be able to interact with it via voice. And because the competition among voice assistants is so strong, we can only expect that WWDC 2018 will bring a bunch more domains and, hopefully, a much better Siri.


Source: Smashing Magazine, Will SiriKit’s Intents Fit Your App? If So, Here’s How To Use Them
