
Why “Yarn 2” is actually Yarn 3

What do we know as Yarn 2?

It’s the modern version of Yarn that comes with important upgrades to the package manager, including pnpm-style symlinks and an innovative Plug’n’Play module installation method for much smaller project sizes and rapid installations.

But after migrating from Yarn 1, you’ll find something interesting, as I did – the thing widely known as Yarn 2 is actually… version 3?

After migrating to "Yarn 2" and checking the version, it was shown to be version 3.

Why is “Yarn 2” using version 3?

It’s because Yarn 1 served as the initial codebase, which was completely overhauled in Yarn v2.0 (the actual version 2) to make it more efficient and effective, with its launch taking place in January 2020. As time went on, a fresh major, Yarn v3.0, was introduced, thankfully without the need for another codebase rewrite. The next major update is expected to be Yarn v4.0, and so on.

Despite the project’s history of releasing few major updates, a growing number of people took to labeling everything built on the new codebase as “Yarn 2”, which includes the 2.x versions and future ones such as 3.x. This, however, is a misnomer, as “Yarn 2” strictly refers to the 2.x versions. A more accurate way to reference the new codebase is “Yarn 2+” or “Yarn Berry” – the codename that the team selected for the new codebase when they started developing it.

As once stated by one of the maintainers in a related GitHub discussion:

Some people have started to colloquially call “Yarn 2” everything using this new codebase, so Yarn 2.x and beyond (including 3.x). This is incorrect though (“Yarn 2” is really just 2.x), and a better term to refer to the new codebase would be Yarn 2+, or Yarn Berry (which is the codename I picked for the new codebase when I started working on it).

arcanis, a Yarn maintainer

How to migrate from Yarn v1 to Yarn Berry

A Yarn Berry installation in progress.

If you’re still using Yarn version 1 – or worse, NPM – you’re missing out.

The new Yarn is loaded with a sizable number of upgrades that will significantly improve your developer experience when you start using it. These range from notable improvements in stability, flexibility, and extensibility, to brand new features, like Constraints.

You can migrate from Yarn v1 to Yarn Berry in 7 easy steps:

  1. Make sure you’re using Node version 18+.
  2. Run corepack enable to activate Corepack.
  3. Navigate to your project directory.
  4. Run yarn set version berry.
  5. Convert your .npmrc and .yarnrc files into .yarnrc.yml (as explained here).
  6. Run yarn install to migrate the lockfile.
  7. Commit all changes.

In case you experience any issues due to breaking changes, this official Yarn Berry migration guide should help.

Final thoughts

The Yarn versioning saga teaches us an important lesson: terminology matters.

What many of us dub as “Yarn 2” is actually “Yarn 2+” or “Yarn Berry”, the game-changing codebase. This misnomer emphasizes our need to stay current, not just with evolving tools and features, but with their rightful names as well. After all, how we understand and converse about these improvements shapes our effectiveness and fluency as developers.

Fine-tuning for OpenAI’s GPT-3.5 Turbo model is finally here

Some great news lately for AI developers from OpenAI.

Finally, you can now fine-tune the GPT-3.5 Turbo model using your own data. This gives you the ability to create customized versions of the OpenAI model that perform incredibly well at specific tasks and give responses in a customized format and tone, perfect for your use case.

For example, we can use fine-tuning to ensure that our model always responds in a JSON format containing Spanish text, with a friendly, informal tone. Or we could make a model that only gives one out of a finite set of responses, e.g., rating customer reviews as critical, positive, or neutral, according to how *we* define these terms.
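
Here’s a rough sketch of what a couple of training examples for such a model could look like, using the chat-style format where each example is a list of messages; the prompts and replies here are made up purely for illustration:

JavaScript
// Hypothetical training examples teaching the model to reply in Spanish,
// as JSON, and in a friendly tone. Each example is one short chat transcript.
const examples = [
  {
    messages: [
      { role: 'system', content: 'Eres un asistente amigable que siempre responde en JSON.' },
      { role: 'user', content: 'What is the capital of France?' },
      { role: 'assistant', content: '{ "respuesta": "¡París, claro que sí!" }' },
    ],
  },
  // ...many more examples like this
];

// The training file is JSONL: one JSON object per line.
const jsonl = examples.map((example) => JSON.stringify(example)).join('\n');
console.log(jsonl);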

As stated by OpenAI, early testers have successfully used fine-tuning in various areas, such as being able to:

  • Make the model output results in a more consistent and reliable format.
  • Match a specific brand’s style and messaging.
  • Improve how well the model follows instructions.

The company also claims that fine-tuned GPT-3.5 Turbo models can match and even exceed the capabilities of base GPT-4 for certain tasks.

Before now, fine-tuning was only possible with weaker, costlier GPT-3 models, like davinci-002 and babbage-002. Providing custom data for a GPT-3.5 Turbo model was only possible with techniques like few-shot prompting and vector embedding.

OpenAI also assures that any data used for fine-tuning any of their models belongs to the customer, and they don’t use it to train their models.

What is GPT-3.5 Turbo, anyway?

Launched earlier this year, GPT-3.5 Turbo is a model range that OpenAI introduced as being great for applications that don’t solely focus on chat. It can handle 4,000 tokens at once, twice the capacity of the preceding model. The company also highlighted that preliminary users shortened their prompts by 90% after applying fine-tuning to the GPT-3.5 Turbo model.

What can I use GPT-3.5 Turbo fine-tuning for?

  • Customer service automation: We can use a fine-tuned GPT model to make virtual customer service agents or chatbots that deliver responses in line with the brand’s tone and messaging.
  • Content generation: The model can be used for generating marketing content, blog posts, or social media posts. The fine-tuning would allow the model to generate content in a brand-specific style according to prompts given.
  • Code generation & auto-completion: In software development, such a model can provide developers with code suggestions and autocompletion to boost their productivity and get coding done faster.
  • Translation: We can use a fine-tuned GPT model for translation tasks, converting text from one language to another with greater precision. For example, the model can be tuned to follow specific grammatical and syntactical rules of different languages, which can lead to higher accuracy translations.
  • Text summarization: We can apply the model in summarizing lengthy texts such as articles, reports, or books. After fine-tuning, it can consistently output summaries that capture the key points and ideas without distorting the original meaning. This could be particularly useful for educational platforms, news services, or any scenario where digesting large amounts of information quickly is crucial.

How much will GPT-3.5 Turbo fine-tuning cost?

There’s the cost of fine-tuning and then the actual usage cost.

  • Training: $0.008 / 1K tokens
  • Usage input: $0.012 / 1K tokens
  • Usage output: $0.016 / 1K tokens

For example, a gpt-3.5-turbo fine-tuning job with a training file of 100,000 tokens that is trained for 3 epochs would have an expected cost of $2.40.

OpenAI, GPT 3.5 Turbo fine-tuning and API updates
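
For clarity, here’s the arithmetic behind that figure, as a quick sketch using the training price listed above:

JavaScript
// Training cost = (training tokens / 1000) * epochs * price per 1K training tokens
const trainingTokens = 100_000;
const epochs = 3;
const pricePer1KTrainingTokens = 0.008; // USD

const cost = (trainingTokens / 1000) * epochs * pricePer1KTrainingTokens;
console.log(cost); // 2.4 -> $2.40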

When will fine-tuning for GPT-4 be available?

This fall.

OpenAI has announced that support for fine-tuning GPT-4, the most recent version of its large language model, is expected to be available later this year, probably during the fall season. This upgraded model has been shown to perform at par with humans across diverse professional and academic benchmarks. It surpasses GPT-3.5 in terms of reliability, creativity, and its capacity to deal with instructions that are more nuanced.

10 powerful JavaScript animation libraries for engaging user experiences

Animations. A fantastic way to stand out from the crowd and grab the attention of your visitors.

With creative object motion and fluid page transitions, you not only add a unique aesthetic appeal to your website but also enhance user engagement and create a memorable first impression.

And creating animations couldn’t be easier than with these 10 powerful JavaScript libraries. Scroll animations, handwriting animations, SPA page transitions, typing animations, color animations, SVG animations… they are endlessly capable. They are the best.

1. Anime.js

An animation created with Anime.js.

With over 43k stars on GitHub, Anime.js is easily one of the most popular animation libraries out there.

It’s a lightweight JavaScript animation library with a simple API that can be used to animate CSS properties, SVG, DOM attributes, and JavaScript objects. With Anime.js, you can play, pause, restart or reverse an animation. The library also provides staggering features for animating multiple elements with follow-through and overlapping actions. There are various animation-related events also included, which we can listen to using callbacks and Promises.
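
To get a feel for the API, a basic Anime.js tween typically looks something like this (the .box selector is just a placeholder for whatever elements you want to animate):

JavaScript
import anime from 'animejs';

// Slide and rotate every element matching .box, looping indefinitely.
anime({
  targets: '.box',
  translateX: 250,
  rotate: '1turn',
  duration: 2000,
  easing: 'easeInOutQuad',
  loop: true,
});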

Visit the Anime.js website

2. Lottie

An animation created with Lottie.

Lottie is a library that parses Adobe After Effects animations exported as JSON with the Bodymovin plugin and renders them natively on mobile and web applications. This eliminates the need to manually recreate the advanced animations created in After Effects by expert designers. The Web version alone has over 27k stars on GitHub.
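
Using it usually boils down to pointing the library at the exported JSON file; here’s a rough sketch, where the container element and the animation.json path are assumptions:

JavaScript
import lottie from 'lottie-web';

// Render a Bodymovin/After Effects export inside a container element.
lottie.loadAnimation({
  container: document.getElementById('lottie-container'),
  renderer: 'svg',
  loop: true,
  autoplay: true,
  path: 'animation.json', // URL of the exported animation data
});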

Visit the Lottie website

3. Velocity

An animation created with Velocity.

With Velocity, you can create color animations, transforms, loops, easings, SVG animations, and more. It uses the same API as the $.animate() method from the jQuery library, and it can integrate with jQuery if it is available. The library provides fade, scroll, and slide effects. Besides being able to control the duration and delay of an animation, you can reverse it sometime after it has completed, or stop it altogether while it is in progress. It has over 17k stars on GitHub and is a good alternative to Anime.js.
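
A typical call follows the Velocity(element, properties, options) shape; here’s a small sketch, with the .box element standing in for your own target:

JavaScript
import Velocity from 'velocity-animate';

const box = document.querySelector('.box');

// Fade the element to half opacity while nudging it right, over 800 ms.
Velocity(box, { opacity: 0.5, marginLeft: '50px' }, { duration: 800, easing: 'easeInSine' });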

Visit the Velocity website

4. Rough Notation

Some Rough Notation annotation styles.

Rough Notation is a JavaScript library for creating and animating colorful annotations on a web page. It uses RoughJS to create a hand-drawn look and feel. You can create several annotation styles, including underline, box, circle, highlight, strike-through, etc., and control the duration and color of each annotation style.
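
In code, that tends to look something like this (the heading element is just an assumed target):

JavaScript
import { annotate } from 'rough-notation';

const heading = document.querySelector('h2'); // any element you want to annotate

// Draw an animated, hand-drawn style underline beneath the element.
const annotation = annotate(heading, {
  type: 'underline',
  color: 'tomato',
  animationDuration: 1000,
});

annotation.show();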

Visit the Rough Notation website

5. Popmotion

An animation created with Popmotion.

Popmotion is a functional library for creating prominent and attention-grabbing animations. What makes it stand out? It makes zero assumptions about the object properties you intend to animate, and instead provides simple, composable functions that can be used in any JavaScript environment.

The library supports keyframes, spring and inertia animations on numbers, colors, and complex strings. It is well-tested, actively maintained, and has over 19k stars on GitHub.
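
Here’s a small sketch of that function-first style, using Popmotion’s animate() helper to drive a DOM transform (the .box element is an assumption):

JavaScript
import { animate } from 'popmotion';

const box = document.querySelector('.box');

// Popmotion animates the raw number; we decide what to do with it on each frame.
animate({
  from: 0,
  to: 300,
  duration: 1000,
  onUpdate: (latest) => {
    box.style.transform = `translateX(${latest}px)`;
  },
});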

Visit the Popmotion website

6. Vivus

An animation created with Vivus.

Vivus is a JavaScript library that allows you to animate SVGs, giving them the appearance of being drawn. It is fast and lightweight with exactly zero dependencies, and provides three different ways to animate SVGs: Delayed, Sync, and OneByOne. You can also use a custom script to draw an SVG in your preferred way.

Vivus also allows you to customize the duration, delay, timing function, and other animation settings. Check out Vivus Instant for live, hands-on examples.
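
A minimal sketch of drawing an inline SVG with Vivus could look like this (my-svg is an assumed element id):

JavaScript
import Vivus from 'vivus';

// Draw the paths of the #my-svg element one after the other.
new Vivus(
  'my-svg',
  { type: 'oneByOne', duration: 200 }, // duration is measured in frames
  () => console.log('Drawing finished')
);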

Visit the Vivus website

7. GreenSock Animation Platform (GSAP)

An animation created with GSAP

The GreenSock Animation Platform (GSAP) is a library that lets you create wonderful animations that work across all major browsers. You can use it in React, Vue, WebGL, and the HTML canvas to animate colors, strings, motion paths, and more. It also comes with a ScrollTrigger plugin that lets you create impressive scroll-based animations with little code.

Used on over 11 million sites, with over 15k stars on GitHub, it is versatile and popular indeed. You can use GSDevTools from GreenSock to easily debug animations created with GSAP.
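
Here’s a quick sketch of a GSAP tween, including the ScrollTrigger shorthand mentioned above (the .box selector is a placeholder):

JavaScript
import { gsap } from 'gsap';
import { ScrollTrigger } from 'gsap/ScrollTrigger';

gsap.registerPlugin(ScrollTrigger);

// Slide and spin the element, starting only once .box scrolls into view.
gsap.to('.box', {
  x: 300,
  rotation: 360,
  duration: 2,
  scrollTrigger: '.box',
});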

Visit the GSAP website

8. Three.js

An animation created with Three.js.

Three.js is a lightweight library for displaying complex 3D objects and animations. It makes use of WebGL, SVG, and CSS3D renderers to create engaging three-dimensional experiences that work across a wide range of browsers and devices. It is a well-known library in the JavaScript community, with over 85k stars on GitHub.
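
Even a “hello world” scene shows the moving parts involved; here’s a minimal spinning-cube sketch (the sizes and camera settings are arbitrary choices):

JavaScript
import * as THREE from 'three';

// Scene, camera, and renderer: the three core objects of a Three.js app.
const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(75, window.innerWidth / window.innerHeight, 0.1, 1000);
camera.position.z = 5;

const renderer = new THREE.WebGLRenderer();
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);

// A cube with a material that needs no lighting to be visible.
const cube = new THREE.Mesh(new THREE.BoxGeometry(1, 1, 1), new THREE.MeshNormalMaterial());
scene.add(cube);

// Render loop: rotate the cube a little on every frame.
function animate() {
  requestAnimationFrame(animate);
  cube.rotation.x += 0.01;
  cube.rotation.y += 0.01;
  renderer.render(scene, camera);
}
animate();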

Visit the Three.js website

9. ScrollReveal

ScrollReveal animations.

The ScrollReveal library lets you easily animate a DOM element as it enters or leaves the browser viewport. It provides various types of elegant effects to reveal or hide an element on scroll in multiple browsers. And it’s quite easy to use too, with zero dependencies and over 21k stars on GitHub.
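
Usage is about as simple as it gets; a sketch like this reveals every .card element as it scrolls into view (the selector and numbers are placeholders):

JavaScript
import ScrollReveal from 'scrollreveal';

// Slide each .card element up into view as it enters the viewport.
ScrollReveal().reveal('.card', {
  origin: 'bottom',
  distance: '40px',
  duration: 800,
  delay: 200,
});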

Visit the ScrollReveal website

10. Barba.js

Page transitions created with Barba.js.

One creative way to make your website outstanding is to add lively transitions between the pages as your users navigate between them. This produces a better user experience than simply displaying the new webpage or reloading the browser.

And that’s why Barba.js is so useful; this library lets you create enjoyable page transitions by making the site run like a Single Page Application (SPA). It reduces the delay between pages and minimizes the number of HTTP requests that the browser makes. It’s gotten almost 11k stars on GitHub.
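
A fade transition with Barba (v2) might be sketched like this; it assumes your pages already contain the data-barba wrapper/container markup, and it borrows GSAP for the actual tweens:

JavaScript
import barba from '@barba/core';
import { gsap } from 'gsap';

// Fade the old container out and the new one in on every navigation.
barba.init({
  transitions: [
    {
      name: 'fade',
      leave({ current }) {
        return gsap.to(current.container, { opacity: 0, duration: 0.5 });
      },
      enter({ next }) {
        return gsap.from(next.container, { opacity: 0, duration: 0.5 });
      },
    },
  ],
});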

Visit the Barba.js website

Bonus

11. Mo.js

An animation created with Mo.js.

Great library for creating compelling motion graphics.

It provides simple, declarative APIs for effortlessly creating smooth animations and effects that look great on devices of various screen sizes. You can move HTML or SVG DOM elements, or you can create a special Mo.js object, which comes with a set of unique capabilities. It is a reliable and well-tested library, with over 1500 tests written and over 17k stars on GitHub.
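
As a taste of those declarative APIs, a small burst effect might be sketched like this (the shape, color, and click trigger are all arbitrary choices):

JavaScript
import mojs from '@mojs/core';

// A burst of eight pink circles expanding outward, replayed on every click.
const burst = new mojs.Burst({
  radius: { 0: 100 }, // grow the burst radius from 0 to 100
  count: 8,
  children: {
    shape: 'circle',
    fill: 'deeppink',
    duration: 1000,
  },
});

document.addEventListener('click', () => burst.replay());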

Visit the Mo.js website

12. Typed.js

An animation created with Typed.js

The name says it all; an animated typing library.

It types out a specific string character by character as if someone were typing in real time, allowing you to set the typing speed and even pause the typing for a specific amount of time. With smart backspacing, it types out successive strings starting with the same set of characters as the current one without backspacing the entire preceding string – as we saw in the demo above.

Also included is support for bulk typing, which types out a group of characters on the screen at the same time, instead of one after the other. Typed.js has over 12k stars on GitHub and is trusted by Slack and Envato.
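
Setting it up is usually just a constructor call; here’s a sketch against an assumed #typing-target element:

JavaScript
import Typed from 'typed.js';

// Type two similar strings; smart backspacing only erases the part that differs.
new Typed('#typing-target', {
  strings: ['First you see this.', 'First you see that.'],
  typeSpeed: 60,
  backSpeed: 30,
  smartBackspace: true,
  loop: true,
});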

Visit the Typed.js website

Final thoughts

The world of web animation is vast and dynamic, constantly evolving with the advent of new technologies and libraries. The animation libraries highlighted in this article offer an array of features to create engaging, interactive, and visually appealing experiences for users. They are a testament to the power and flexibility of JavaScript, and demonstrate how animations greatly enhance the user experience.

As a developer, harnessing these tools will no doubt elevate your projects, making them stand out in an increasingly competitive digital landscape.

This new ES7 feature made my math 3 times easier

But 5 lines of Java is one line of Python.

How many times have you heard something like that from lovers of the latter?

Seems like they love to trash languages they stubbornly believe are verbose. I came to see that “Pythonic” is something truly cherished by our friends in the Python community.

Your Python code works, and so what? Where is elegance? Where is readability?

Think you can write a simple for loop and get away with it?

Python
total = 0
for i in range(1, 11):
    total += i

print("The sum of the first 10 numbers is:", total)

Just wait till one of them finds out – to say you’ll face severe criticism is an understatement.

Because apparently – and I kind of agree – it’s just not “beautiful” or concise enough.

To be “Pythonic” is best.

Python
total = sum(i for i in range(1, 11))
print("The sum of the first 10 numbers is:", total)

An ES7 feature that brings syntactic sugar and conciseness

The ** operator.

This one almost always comes up in Python’s favor when talking about language conciseness, up there with generators and the // operator.

It’s good to know that JavaScript now has this feature too – and it has had it for over 6 years, in fact.

But it was surprising to find that a sizeable number of our fellow JavaScripters never knew it was in the language.

It’s now effortless to raise a number to a power with the ** operator. Instead of Math.pow(a, b), you do a ** b.

JavaScript
const result = Math.pow(10, 2);
console.log(result); // 100

const result2 = Math.pow(2, Math.pow(3, 2));
console.log(result2);

const result3 = 10 ** 2;
console.log(result3); // 100

const result4 = 2 ** 3 ** 2;
console.log(result4); // 512

We don’t need a function for such a common math operation anymore.

You can even pass a decimal number as a power with ** – Math.pow() can do this too:

JavaScript
const result = Math.pow(49, 1.5);
console.log(result); // 343

const result2 = 49 ** 1.5;
console.log(result2); // 343

And it’s not only a drop-in replacement for Math.pow(); ** can take BigInts too:

JavaScript
// ❌ Error: Cannot convert a BigInt value to a number
const result1 = Math.pow(32n, 2);
console.log(result1);
JavaScript
const result2 = 32n ** 2n;
console.log(result2); // 1024n

BigInts let us represent numbers of any size without losing precision or experiencing overflow errors.

JavaScript
const veryLargeNumber = 1234567890123456789012345678901234567890n;
console.log(typeof veryLargeNumber); // "bigint"

console.log(veryLargeNumber * 2n);
// 2469135780246913578024691357802469135780n

You can see that we simply add an n at the end of the digits to make it a BigInt.

Final thoughts

Language wars are a fun programmer pastime.

Itโ€™s always fun to debate about which programming language is more elegant and concise.

But at the end of the day, weโ€™ve got to keep in mind that writing readable and maintainable code is what matters most.

In this article, we saw that the ** operator introduced in ES7 for JavaScript is a neat trick that can make your code more concise, and it even works with BigInts!

More features keep getting added every year – ES13 was released in 2022 – bringing more capabilities and syntactic sugar.

So, keep exploring the possibilities of your favorite programming language, and have fun coding!

How to Simulate a Mouse Click in JavaScript

In this article, we’ll learn multiple ways to easily simulate a mouse click or tap on an element in the HTML DOM, using JavaScript.

Use click() method

This is the easiest and most basic method to simulate a mouse click on an element. Every HTMLElement has a click() method that can be used to simulate clicks on it. It fires the click event on the element when it is called. The click event bubbles up to elements higher in the document tree and fires their click events.

JavaScript
const target = document.querySelector('#target'); target.click();

To use the click() method, you first need to select the element in JavaScript, using a method like querySelector() or getElementById(). You’ll be able to access the HTMLElement object from the method’s return value, and call the click() method on it to simulate the click.

Let’s see an example, where we simulate a button click every second, by calling click() in setInterval().

JavaScript
const target = document.querySelector('#target');

let clickCount = 0;
const clickCountEl = document.getElementById('click-count');
clickCountEl.innerText = clickCount;

// Programmatically click button every 1 second
setInterval(() => {
  target.click();
  clickCount++;
  clickCountEl.innerText = clickCount;
}, 1000);
HTML
<button id="target">Target</button> <br /><br /> Clicks: <span id="click-count"></span>

Result

The button is clicked programmatically every 1 second.

Notice how there are no visual indicators of a click occurring because it’s a programmatic click.

Since click() causes the click event to fire on the element, any click event listeners you attach to the element will be invoked from a click() call.

In the following example, we use a click event listener (instead of setInterval) to increase the click count, and another listener to toggle the button’s style when clicked.

JavaScript
const target = document.querySelector('#target');

let clickCount = 0;
const clickCountEl = document.getElementById('click-count');
clickCountEl.innerText = clickCount;

target.addEventListener('click', () => {
  clickCount++;
  clickCountEl.innerText = clickCount;
});

target.addEventListener('click', () => {
  target.classList.toggle('btn-style');
});

setInterval(() => {
  target.click();
}, 1000);
HTML
<button id="target">Target</button> <br /><br /> Count: <span id="click-count"></span>
CSS
.btn-style { color: white; background-color: blue; border-radius: 2px; }
The click count is incremented and the button’s style is toggled from the simulated click.

Simulate mouse click with MouseEvent object

Alternatively, we can use a custom MouseEvent object to simulate a mouse click in JavaScript.

JavaScript
const targetButton = document.getElementById('target');
const clickEvent = new MouseEvent('click');

targetButton.dispatchEvent(clickEvent);

The MouseEvent interface represents events that occur from the user interacting with a pointing device like the mouse. It can represent common events like click, dblclick, mouseup, and mousedown.

After selecting the HTMLElement and creating a MouseEvent, we call the dispatchEvent() method on the element to fire the event on the element.

For example:

JavaScript
const targetButton = document.getElementById('target');

let clickCount = 0;
const clickCountEl = document.getElementById('click-count');
clickCountEl.innerText = clickCount;

targetButton.addEventListener('click', () => {
  clickCount++;
  clickCountEl.innerText = clickCount;
});

setInterval(() => {
  const clickEvent = new MouseEvent('click');
  targetButton.dispatchEvent(clickEvent);
}, 1000);
HTML
<button id="target">Target</button> <br /><br /> Clicks: <span id="click-count"></span>
The button is clicked programmatically every second with MouseEvent.

Any click event listener attached to the element is called with the MouseEvent object that was passed to dispatchEvent().

JavaScript
const clickEvent = new MouseEvent('click');
const targetButton = document.getElementById('target');

targetButton.addEventListener('click', (event) => {
  console.log(clickEvent === event); // true
});

targetButton.addEventListener('click', (event) => {
  console.log(clickEvent === event); // true
});

targetButton.dispatchEvent(clickEvent);

Result

The dispatched MouseEvent is passed to click listeners.

Apart from the event type passed as the first argument, we can pass options to the MouseEvent() constructor to control specific details about the event:

JavaScript
const targetButton = document.getElementById('target');

const clickEvent = new MouseEvent('click', {
  bubbles: true,
  cancelable: false,
  view: window,
});

targetButton.dispatchEvent(clickEvent);

The bubbles property determines whether the event can bubble up the DOM tree to the element’s containing elements.

The cancelable property determines whether or not the event’s default action can be prevented.

The view property sets the event’s AbstractView. You should pass the window object here.

Find more options in the MDN documentation for the MouseEvent() constructor and the now deprecated MouseEvent.initMouseEvent() method.

Simulate mouse click at position

We can also simulate a mouse click on an element at a specific position on the visible part of the webpage.

We do this by selecting the element with the document.elementFromPoint() method, and then simulating the click with the click() method.

JavaScript
const targetButton = document.elementFromPoint(x, y);
targetButton.click();

The elementFromPoint() method returns the topmost element at the specified coordinates, relative to the browser’s viewport. It returns an HTMLElement, which we call click() on to simulate the mouse click from JavaScript.

In the following example, we have a button and a containing div:

HTML
<div id="container">
  <button id="target">Target</button>
  <br /><br />
  Count: <span id="click-count"></span>
</div>

Using CSS, we position the #container div to make its top-left corner exactly at the position (200, 100) in the viewport.

CSS
#container {
  position: absolute;
  top: 100px;
  left: 200px;
  height: 100px;
  width: 100px;
  border: 1px solid black;
}

The #target button is at point (0, 0) in its #container, so point (200, 100) in the viewport.

To get the position of the #target with more certainty, we add an offset of 10px each to get the final position to search for: (210, 110).

JavaScript
let clickCount = 0;
const clickCountEl = document.getElementById('click-count');
clickCountEl.innerText = clickCount;

const targetButton = document.getElementById('target');
targetButton.addEventListener('click', () => {
  clickCount++;
  clickCountEl.innerText = clickCount;
});

// Offset of 10px to get button's position in container
const x = 200 + 10;
const y = 100 + 10;

setInterval(() => {
  const buttonAtPoint = document.elementFromPoint(x, y);
  buttonAtPoint.click();
}, 1000);

To verify that we’ve actually selected the button by its position and that it’s receiving clicks, we also select it by its ID (target) and set a click event listener on the returned HTMLElement, which will increase the count of how many times it has been clicked.

And we’re successful:

A click is simulated on the button at point (210, 110).

Simulate mouse click at position with MouseEvent object

We can also simulate a mouse click on an element at a certain (x, y) position using a MouseEvent object and the dispatchEvent() method.

Here’s how:

JavaScript
const clickEvent = new MouseEvent('click', {
  view: window,
  screenX: x,
  screenY: y,
});

const elementAtPoint = document.elementFromPoint(x, y);
elementAtPoint.dispatchEvent(clickEvent);

This time we specify the screenX and screenY options when creating the MouseEvent.

screenX sets the position where the click event should occur on the X-axis. It sets the value of the MouseEvent’s screenX property.

screenY sets the position where the click event should occur on the Y-axis. It sets the value of the MouseEvent’s screenY property.

So we can use MouseEvent to replace the click() method in our last example:

JavaScript
// ...

// Offset of 10px to get button's position in container
const x = 200 + 10;
const y = 100 + 10;

setInterval(() => {
  const buttonAtPoint = document.elementFromPoint(x, y);
  const clickEvent = new MouseEvent('click', {
    view: window,
    screenX: x,
    screenY: y,
  });
  buttonAtPoint.dispatchEvent(clickEvent);
}, 1000);

Note: screenX and screenY don’t determine where the click occurs (elementFromPoint does). These options are set so that they can be accessed from any click event listener attached to the element at that point.

3 Common Mistakes to Avoid When Handling Events in React

In React apps, event listeners or observers perform certain actions when specific events occur. While it’s quite easy to create event listeners in React, there are common pitfalls you need to avoid to prevent confusing errors. These mistakes are made most often by beginners, but it’s not rare for them to be the reason for one of your debugging sessions as a reasonably experienced developer.

In this article, we’ll be exploring some of these common mistakes, and what you should do instead.

1. Accessing state variables without dealing with updates

Take a look at this simple React app. It’s essentially a basic stopwatch app, counting up indefinitely from zero.

JavaScript
import { useState, useEffect } from 'react';

export default function App() {
  const [time, setTime] = useState(0);

  useEffect(() => {
    const timer = setInterval(() => {
      setTime(time + 1);
    }, 1000);

    return () => {
      window.clearInterval(timer);
    };
  }, []);

  return (
    <div>Seconds: {time}</div>
  );
}

However, when we run this app, the results are not what we’d expect:

The seconds count is stuck at 1.

This happens because the time state variable referenced by the setInterval() callback/closure is the stale value from the render in which the closure was defined.

The closure is only able to access the time variable from the first render (which had a value of 0) and can’t access the new time value in subsequent renders. A JavaScript closure remembers the variables from the place where it was defined.

The issue is also due to the fact that the setInterval() closure is defined only once in the component.

The time variable from the first render will always have a value of 0, as React doesn’t mutate a state variable directly when setState is called, but instead creates a new variable containing the new state. So when the setInterval closure is called, it only ever updates the state to 1.

Here are some ways to avoid this mistake and prevent unexpected problems.

1. Pass function to setState

One way to avoid this error is by passing a callback to the state updater function (setState) instead of passing a value directly. React will ensure that the callback always receives the most recent state, avoiding the need to access state variables that might contain old state. It will set the state to the value the callback returns.

Here’s how we apply this for our example:

JavaScript
import { useState, useEffect } from 'react';

export default function App() {
  const [time, setTime] = useState(0);

  useEffect(() => {
    const timer = setInterval(() => {
      // 👇 Pass callback
      setTime((prevTime) => prevTime + 1);
    }, 1000);

    return () => {
      window.clearInterval(timer);
    };
  }, []);

  return (
    <div>Seconds: {time}</div>
  );
}
The time is increased by 1 every second – success.

Now the time state will be incremented by 1 every time the setInterval() callback runs, just like it’s supposed to.

2. Event listener re-registration

Another solution is to re-register the event listener with a new callback every time the state is changed, so the callback always accesses the fresh state from the enclosing scope.

We do this by passing the state variable to useEffect‘s dependencies array:

JavaScript
import { useState, useEffect } from 'react';

export default function App() {
  const [time, setTime] = useState(0);

  useEffect(() => {
    const timer = setInterval(() => {
      setTime(time + 1);
    }, 1000);

    return () => {
      window.clearInterval(timer);
    };
  }, [time]);

  return (
    <div>Seconds: {time}</div>
  );
}

Every time the time state changes, a new callback that accesses the fresh state is registered with setInterval(). setTime() is called with the latest time state plus 1, which increments the state value.

2. Registering event handler multiple times

This is a mistake frequently made by developers new to React hooks and functional components. Without a basic understanding of the re-rendering process in React, you might try to register event listeners like this:

JavaScript
import { useState } from 'react';

export default function App() {
  const [time, setTime] = useState(0);

  setInterval(() => {
    setTime((prevTime) => prevTime + 1);
  }, 1000);

  return (
    <div>Seconds: {time}</div>
  );
}

Or you might put it in a useEffect hook like this:

JavaScript
import { useState, useEffect } from 'react';

export default function App() {
  const [time, setTime] = useState(0);

  useEffect(() => {
    setInterval(() => {
      setTime((prevTime) => prevTime + 1);
    }, 1000);
  });

  return (
    <div>Seconds: {time}</div>
  );
}

If you do have a basic understanding of this, you should be able to already guess what this will lead to on the web page.

The seconds keep accelerating.
It eventually gets as bad as this.

What’s happening?

What’s happening is that in a functional component, code outside hooks, and outside the returned JSX markup is executed every time the component re-renders.

Here’s a basic breakdown of what happens in a timeline:

  1. 1st render: listener 1 registered
  2. 1 second after listener 1 registration: time state updated, causing another re-render.
  3. 2nd render: listener 2 registered.
  4. Listener 1 never got de-registered after the re-render, so…
  5. 1 second after last listener 1 call: state updated
  6. 3rd render: listener 3 registered.
  7. Listener 2 never got de-registered after the re-render, so…
  8. 1 second after listener 2 registration: state updated
  9. 4th render: listener 4 registered.
  10. 1 second after last listener 1 call: state updated
  11. 5th render: listener 5 registered.
  12. 1 second after last listener 2 call: state updated
  13. 6th render: listener 6 registered.
  14. Listener 3 never got de-registered after the re-render, so…
  15. 1 second after listener 3 registration: state updated.
  16. 7th render: listener 7 registered…

Eventually, things spiral out of control as hundreds and then thousands (and then millions) of callbacks are created, each running at different times within the span of a second, incrementing the time by 1.

The fix for this is already in the first example in this article – put the event listener in the useEffect hook, and make sure to pass an empty dependencies array ([]) as the second argument.

JavaScript
import { useEffect, useState } from 'react';

export default function App() {
  const [time, setTime] = useState(0);

  useEffect(() => {
    setInterval(() => {
      setTime((prevTime) => prevTime + 1);
    }, 1000);
  }, []);

  return (
    <div>Seconds: {time}</div>
  );
}

useEffect runs after the first render and whenever any of the values in its dependencies array change, so passing an empty array makes it run only on the first render.

The time increases steadily, but by 2.

The time increases steadily now, but as you can see in the demo, it goes up by 2 seconds instead of 1 second as in our very first example. This is because in React 18 strict mode, all components mount, unmount, then mount again, so useEffect runs twice even with an empty dependencies array, creating two listeners that each update the time by 1 every second.

We can fix this issue by turning off strict mode, but we’ll see a much better way to do so in the next section.

3. Not unregistering event handler on component unmount.

What happened here was a memory leak. We should have ensured that any created event listener is unregistered when the component unmounts. So when React 18 strict mode does the compulsory unmounting of the component, the first interval listener is unregistered before the second listener is registered when the component mounts again. Only the second listener will be left and the time will be updated correctly every second – by 1.

You can perform an action when the component unmounts by placing it in the cleanup function that useEffect optionally returns. So we use clearInterval to unregister the interval listener there.

JavaScript
import { useEffect, useState } from 'react';

export default function App() {
  const [time, setTime] = useState(0);

  useEffect(() => {
    const timer = setInterval(() => {
      setTime((prevTime) => prevTime + 1);
    }, 1000);

    // 👇 Unregister interval listener
    return () => {
      clearInterval(timer);
    };
  }, []);

  return (
    <div>Seconds: {time}</div>
  );
}

useEffect’s cleanup function runs before every re-run of the effect and when the component unmounts, not only on unmount. This prevents memory leaks that happen when an observable prop changes value without the observers in the component unsubscribing from the previous observable value.

Conclusion

Creating event listeners in React is pretty straightforward; you just need to be aware of these caveats so you avoid unexpected errors and frustrating debugging spells. Avoid accessing stale state variables, don’t register more event listeners than required, and always unregister the event listener when the component unmounts.

Stop autosaving your code

Autosave has grown in popularity recently and become the default for many developers and teams, a must-have feature for various code editors. Apps like Visual Studio stubbornly refuse to fully provide the feature, and others make it optional. WebStorm, PHPStorm, and other JetBrains products have it enabled by default; for VSCode, you have to turn it on if you want it.

So clearly, we have two opposing views on the value of autosave: even though it can be highly beneficial, it has its downsides. In this article, we’ll look at both sides of the autosave divide – good reasons for turning it off, and good reasons not to.

Why you should stop autosaving your code

First, some reasons to think twice before enabling autosave in your code editor:

1. Higher and wasted resource usage

High VSCode CPU usage

When using tools that perform an expensive action any time a file is changed and saved – build watchers, continuous testing tools, FTP client file syncers, etc. – turning on autosave will make these actions run much more often. They will also run when there are errors in the file, and when you make only a tiny change. It might instead be preferable for these tools to run only when they need to: when you reach a point where you really want to see the results of your changes.

With greater CPU and memory usage come shorter battery life and more heat from higher CPU temperatures. Admittedly, this will continue to become less and less of an issue as computers increase in processing power, memory capacity, and battery life across the board. But depending on your particular situation, you might want to conserve these resources as much as possible.

2. Harder to recover from unexpected errors

Error output in the console.

With autosave enabled, any single change you make to your code file is written to disk, whether these changes leave your file in a valid state or not. This makes it harder to recover from unwanted changes.

What if you make an unintended and possibly buggy change, maybe from temporarily trying something out, and then close the file accidentally or unknowingly (autosave makes this more likely to happen)? With your Undo history wiped out, it will be harder to recover the previous working version of the file. You might even forget how the code used to look before the change, and then have to expend some mental effort to take the code back to what it was.

Git logo.

Of course, using version control tools like Git and Mercurial significantly decreases the chances of this happening. Still, the previous working version of the file you want to recover could be one with uncommitted changes, not available from version control – especially if you don’t commit very frequently, or your commit schedule is determined by more than just the code working after small changes, e.g., committing when a mini milestone is reached, committing after every successful build, etc.

So if you want to continue enjoying the benefits of auto-save while minimizing the possibility of this issue occurring, it’s best if you always use source control and have a frequent commit schedule.

3. No auto-formatting on save

VSCode "Format on Save" option

Many IDEs and text editors have a feature that automatically formats your code, so you can focus on the task at hand. For example, VSCode has built-in auto-formatting functionality, and also allows extensions to be written to provide more advanced or opinionated auto-formatters for various languages and file extensions.

These editors typically provide an option to format the file when it is saved. For manual saving, this makes sense, as you usually press Ctrl/Cmd + S after making a small working change to a file and pausing typing. This is a natural point for formatting, so it’s a great idea to combine it with the saving action so there’s no need to think about it.

Prettier's format-on-save feature.

However, this feature isn’t very compatible with auto-save, and that’s why editors/IDEs like WebStorm and VSCode do not format your code for you on auto-save (you can still press Ctrl (Cmd) + S for it to happen, but isn’t one of the reasons for enabling auto-save to avoid this over-used keyboard shortcut?).

For one, it would probably be annoying for the cursor to change position due to auto-formatting as you’re typing. And then, there’s also the thing we already talked about earlier – the file won’t always be syntactically valid after an auto-save, and the auto-formatter will fail.

There is one way, though, to have auto-formatting while still leaving autosave turned on, and that is enabling auto-formatting on commit. You can do this using Git pre-commit hooks provided by tools like Prettier and Husky.

Still, this only happens on commit, so unless your code isn’t too messed up or you’re ready to format manually, you’ll have to endure the disorderliness until your next commit (or just press that Ctrl + S).

4. Can be distracting

If you have a tool in your project that performs an action when files are saved and indicates this visually in your editor – e.g., a pop-up notification for recompilation, or output in the terminal for rebuilding – then with auto-save turned on, it can be a bit distracting for these actions to occur whenever you stop typing for a little while.

For instance, in this demo, notice how the terminal output in VSCode changes wildly from typing in a small bunch of characters:

The terminal output changes wildly from typing in a small bunch of characters.

Text editors have tried to fix this problem (and the resource usage problem too) by adding autosave delays; waiting a certain period of time since the file was last changed before actually committing the changes to disk.

This reduces the frequency at which the save-triggering actions occur and solves the issue to an extent, but it’s a trade-off as lack of immediate saving produces another non-ideal situation.

5. Auto-save is not immediate

The auto-save doesn't happen immediately.

Having an auto-save delay means that your code file will not be saved immediately. This can lead to some problems:

Data loss

Probably the biggest motivator for enabling auto-save is to reduce the likelihood that you’ll lose all the hard work you’ve put into creating code should an unexpected event like a system crash or the forced closing of the application occur. The higher your auto-save delay, the greater the chance of this data loss happening.

VSCode takes this into account; when its auto-save delay is set to 2 or more seconds, it will show the unsaved file indicator for a recently modified file, and the unsaved-changes warning dialog if you try to close the file before the delay completes.

On-save action lags

Tools that run on save, like build watchers, will be held back by the auto-save delay. With manual save, you know that hitting Ctrl + S will make the watcher re-build immediately, but with delayed auto-save, you’ll have to live with the lag between you finishing typing and the watcher reacting to the changes. This could impact the responsiveness of your workflow.

Why you should autosave your code

The reasons above probably won’t be enough to convince many devs to disable autosave. It is a fantastic feature after all. And now let’s look at some of the reasons why it’s so great to have:

1. No more Ctrl + S fatigue

Comic on Ctrl + S fatigue.
Image source: CommitStrip

If you use manual save, you probably press this keyboard shortcut hundreds or even thousands of times in a working day. Auto-saving helps you avoid this entirely. Even if you’re very used to it now, once you get used to your files being autosaved, you’ll be hesitant to go back to the days of carrying out the ever-present chore of Ctrl + S.

Eradicating the need for Ctrl + S might even lower your chances of suffering from repetitive strain injury, as you no longer have to move your wrists and fingers over and over to type the key combination.

2. Save time and increase productivity

Save time photo.
Save icons created by Kiranshastry – Flaticon

The time you spend pressing the key combination to save a file might not seem like much, but it does add up over time. Turning auto-save on lets you use this time for more productive activities. Of course, if you just switched to auto-save, you’ll have to work on unlearning your Ctrl + S reflex for this to be a benefit to you.

3. Certainty of working with latest changes

Any good automation turns a chore into a background operation you no longer have to think about. This is what auto-save does to saving files; no longer are you unsure of whether you’re working with the most recent version of the file. Build watchers and other on-file-change tools automatically run after the file’s contents are modified, and display output associated with the latest file version.

4. Avoids errors due to file not being saved

Error output in the console.

This follows from the previous point. Debugging can be a tedious process and it’s not uncommon for developers to forget to save a file when tirelessly hunting for bugs. You probably don’t want to experience the frustration of scrutinizing your code, line after line, wondering how this particular bug can still exist after everything you’ve done.

You might think I’m exaggerating, but it might take up to 15 (20? 30??) minutes before you finally notice the unsaved file indicator. Especially if you’ve been trapped in a cycle of making small changes, saving, seeing the futility of your changes, making more small changes, saving… when you’re finally successful and pressing Ctrl + S is the only issue, you might just assume your change didn’t work, instead of checking for other possible reasons for the reoccurrence of the error.

5. Encourages smaller changes due to triggering errors faster

When a tool performs an action due to a file being saved, the new contents of the file might be invalid and trigger an error. For example, a test case might fail when a continuous testing tool re-runs or there might be a syntax error when a build watcher re-builds.

Since this type of on-file-change action occurs more often (possibly much more often) when files are auto-saved, it will take a shorter time for the action to happen after you type code that causes an error, and for you to be notified of the error. You will have made a smaller amount of code changes, which makes it easier to identify the source of the error.

Conclusion

Autosave is an amazing feature with the potential to significantly improve your quality of life as a developer when used properly. Still, it’s not without its disadvantages, and as we saw in this article, enabling or disabling it is a trade-off to live with. Choose auto-format on save and lower CPU usage, or choose to banish Ctrl + S forever and gain the certainty of working with up-to-date files.

What are your views concerning the autosave debate? Please let me know in the comments!

NEW: *Built-In* Syntax Highlighting on Medium

If you frequently read or write coding articles on Medium, you’ll know that it hasn’t had any syntax highlighting support for years now, despite programming being one of the most common topics on the platform. Software writers have had to resort to third-party tools to produce beautiful code highlighting that enhances readability.

Luckily, all that should change soon, as recently the Medium team finally added built-in syntax highlighting support to the code block for major programming languages.

The Medium code block now has syntax highlighting support.

As you can see in the demo, the code block can now automatically detect the code’s language and highlight it.

Manual syntax highlighting

Auto-detection doesn’t always work correctly though, especially for small code snippets, possibly due to the syntax similarities between multiple languages. Notice in the demo how the detected language changed during typing, from R to C++ to Go before arriving at JavaScript.

For tiny code snippets, auto-detection will likely fail:

Auto-detection fails to correctly detect the language.
Bash?

In such a case you can select the correct language from the drop-down list:

Manually setting the language for syntax highlighting.

Remove syntax highlighting

If the code is of a language not listed or it doesn’t require highlighting, you can select None and remove the highlighting.

Removing syntax highlighting with the None option.

Note that syntax highlighting isn’t applied to articles published before the feature arrived, probably because it would produce incorrect results in them if auto-detection failed.

So now we no longer need GitHub Gists or Carbon for this. Syntax highlighting on Medium is now easier than ever before.

How Does the useDeferredValue Hook Work in React?

React now has concurrency support with the release of version 18, and there are numerous features that help make better use of system resources and boost app performance. One such feature is the useDeferredValue hook. In this article, we’re going to learn about useDeferredValue and understand the scenarios where we can use it.

Why do we need useDeferredValue?

Before we can see this hook in action, we need to understand something about how React manages state and updates the DOM.

Let’s say we have the following code:

App.js

import { useState, useMemo } from 'react';

export default function App() {
  const [name, setName] = useState('');

  const computedValue = useMemo(() => {
    return getComputedValue(name);
  }, [name]);

  const handleChange = (event) => {
    setName(event.target.value);
  };

  return (
    <input
      type="text"
      placeholder="Username"
      value={name}
      onChange={handleChange}
    />
  );
}

Here we create a state variable with the useState hook, and a computed value (computedValue) derived from the state. We use the useMemo hook to recalculate the computed value only when the state changes.

So when the value of the input field changes, the name state variable is updated and the computed value is recomputed before the DOM is updated.

This usually isn’t an issue, but sometimes this recalculation process involves a large amount of computation and takes a long time to finish executing. This can reduce performance and degrade the user experience.

For example, we could be developing a feature that lets a user search for an item in a gigantic list:

App.js

import { useState, useMemo } from 'react';

function App() {
  const [query, setQuery] = useState('');

  const list = useMemo(() => {
    // 👇 Filtering through large list impacts performance
    return largeList.filter((item) => item.name.includes(query));
  }, [query]);

  const handleChange = (event) => {
    setQuery(event.target.value);
  };

  return (
    <>
      <input type="text" value={query} onChange={handleChange} placeholder="Search"/>
      {list.map((item) => (
        <SearchResultItem key={item.id} item={item} />
      ))}
    </>
  );
}

In this example, we have a query state variable used to filter through a huge list of items. The longer the list, the more time it will take for the filtering to finish and the list variable to be updated for the DOM update to complete.

So when the user types something in the input field, the filtering will delay the DOM update, and the text in the input won’t immediately reflect what the user typed. This slow feedback will have a negative effect on how responsive your app feels to your users.

I simulated the slowness in the demo below so you can better understand this problem. There are only a few search results for you to properly visualize it, and they’re each just the uppercase of whatever was typed into the input field.

In this demo, I am typing each character one after the other as quickly as I can, but because of the artificial slowness, it takes about a second for my keystroke to change the input text.

The input doesn’t respond to keystrokes fast enough.

useDeferredValue in action

This is a situation where the useDeferredValue hook is handy. useDeferredValue() accepts a state value as an argument and returns a copy of the value that will be deferred, i.e., when the state value is updated, the copy will not update accordingly until after the DOM has been updated to reflect the state change. This ensures that urgent updates happen and are not delayed by less critical, time-consuming ones.

import { useState, useMemo, useDeferredValue } from 'react';

function App() {
  const [query, setQuery] = useState('');

  // 👇 useDeferredValue
  const deferredQuery = useDeferredValue(query);

  const list = useMemo(() => {
    return largeList.filter((item) => item.name.includes(deferredQuery));
  }, [deferredQuery]);

  const handleChange = (event) => {
    setQuery(event.target.value);
  };

  return (
    <>
      <input type="text" value={query} onChange={handleChange} placeholder="Search" />
      {list.map((item) => (
        <SearchResultItem key={item.id} item={item} />
      ))}
    </>
  );
}

In the example above, our previous code has been modified to use the useDeferredValue hook. As before, the query state variable will be updated when the user types, but this time, useMemo won’t be invoked right away to filter the large list, because now deferredQuery is the dependency useMemo is watching for changes, and useDeferredValue ensures that deferredQuery will not be updated until after query has been updated and the component has been re-rendered.

Since useMemo won’t be called and hold up the DOM update from the change in the query state, the UI will be updated without delay and the input text will change once the user types. This solves the responsiveness issue.

After the query state is updated, then deferredQuery will be updated, causing useMemo to filter through the large list and recompute a value for the list variable, updating the list of items shown below the input field.

The input field responds instantly to keystrokes.

As you can see in the demo, the text changes immediately as I type, but the list lags behind and updates sometime later.

If we keep changing the input field’s text in a short period (e.g., by typing fast), the deferredQuery state will remain unchanged and the list will not be updated. This is because the query state will keep changing before useDeferredValue can be updated, so useDeferredValue will continue to delay the update until it has time to set deferredQuery to the latest value of query and update the list.

Here’s what I mean:

Typing quickly prevents the list from updating right away.

This is quite similar to debouncing, as the list is not updated till a while after input has stopped.

Tip

Sometimes in our apps, we’ll want to perform an expensive action when an event occurs. If this event happens multiple times in a short period, the action will be done as many times, decreasing performance. To solve this, we can set a requirement that the action only be carried out “X” amount of time since the most recent occurrence of the event. This is called debouncing.

For example, in a sign-up form, instead of sending a request once the user types to check for a duplicate username in the database, we can make the request get sent only 500 ms since the user last typed in the username input field (or of course, we could perform this duplicate check after the user submits the form instead of near real-time).
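
For reference, a bare-bones debounce of that username check could be sketched like this; usernameInput and checkUsername() are hypothetical stand-ins for your own input element and API call:

JavaScript
const usernameInput = document.getElementById('username'); // hypothetical input field
let timeoutId;

usernameInput.addEventListener('input', (event) => {
  clearTimeout(timeoutId); // cancel the check scheduled for the previous keystroke
  timeoutId = setTimeout(() => {
    checkUsername(event.target.value); // hypothetical duplicate-username request
  }, 500); // runs only 500 ms after the user stops typing
});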

Since the useDeferredValue hook defers updates and causes additional re-renders, it’s important that you don’t overuse it, as this could actually cause the performance problems we’re trying to avoid by forcing React to do extra re-renders in your app. Use it only when you have critical updates that should happen as soon as possible without being slowed down by updates of lower priority.

Conclusion

useDeferredValue accepts a state value and returns a copy of it that will not be updated until the component has been re-rendered from an update of the original state. This improves the performance and responsiveness of the app, as time-consuming updates are put off to a later time to make way for the critical ones that should be shown in the DOM without delay for the user to see.

JavaScript Intersection Observer: Everything You Need to Know

The Intersection Observer API is used to asynchronously observe changes in the intersection of an element with the browser’s viewport. It makes it easy to perform tasks that involve detecting the visibility of an element, or the visibility of two elements relative to each other, without making the site sluggish and diminishing the user experience. We’re going to learn all about it in this article.

Uses of Intersection Observer

Before we start exploring the Intersection Observer API, let’s look at some common reasons to use it in our web apps:

Infinite scrolling

This is a web design technique where content is loaded continuously as the user scrolls down. It eliminates the need for pagination and can improve user dwell time.

Lazy loading

Lazy loading is a design pattern in which images or other content are loaded only when they are scrolled into the view of the user, to increase performance and save network resources.

Scroll-based animations

This means animating elements as the user scrolls up or down the page. Sometimes the animation plays completely once a certain scroll position is reached. Other times, the animation’s progress is tied to the scroll position, advancing as the user scrolls.

Ad revenue calculation

We can use Intersection Observer to detect when an ad is visible to the user and record an impression, which affects ad revenue.

Creating an Intersection Observer

Let’s take a look at a simple use of an Intersection Observer in JavaScript.

index.js

const observer = new IntersectionObserver((entries) => {
  for (const entry of entries) {
    const intersecting = entry.isIntersecting;
    entry.target.style.backgroundColor = intersecting ? 'blue' : 'black';
  }
});

const box = document.getElementById('box');

observer.observe(box);

The callback function receives an array of objects of the IntersectionObserverEntry interface. Each object contains intersection-related information about an element currently being watched by the Observer.

The callback is called whenever the target element transitions into or out of intersection with the viewport. It is also called the first time the Observer is asked to watch the element.

We use the for...of loop to iterate through the entries passed to the callback. We’re observing only one element, so the entries array will contain just the IntersectionObserverEntry object that represents the box, and the for...of loop will have only one iteration.

The isIntersecting property of an IntersectionObserverEntry object returns a boolean value that indicates whether the element is intersecting with the viewport.

When isIntersecting is true at the time the callback fires, the element has just transitioned from non-intersecting to intersecting. When it is false, the element has just transitioned from intersecting to non-intersecting.

So we use the isIntersecting property to set the color of the element to blue as it enters the viewport, and back to black as it leaves.

We call the observe() method on the IntersectionObserver object to make the Observer start watching the element for an intersection.

In the demo below, the white area with the scroll bar represents the viewport. The gray part indicates areas on the page that are outside the viewport and not normally visible in the browser.

Watch how the color of the box changes as soon as one single pixel of it enters the viewport:

The element changes color once a single pixel of it enters the viewport.

Intersection Observer options

Apart from the callback function, the IntersectionObserver() constructor also accepts an options object we use to customize the conditions that must be met for the callback to be invoked.

threshold

The threshold property accepts a value between 0 and 1 that specifies the percentage of the element that must be visible within the viewport for the callback to be invoked. It has a value of 0 by default, meaning that the callback will be run once a single pixel of the element enters the viewport.

Let’s modify our previous example to use a threshold of 1 (100%):

index.js

const observer = new IntersectionObserver(
  (entries) => {
    for (const entry of entries) {
      const intersecting = entry.isIntersecting;
      entry.target.style.backgroundColor = intersecting ? 'blue' : 'black';
    }
  },
  // 👇 Threshold is 100%
  { threshold: 1 }
);

const box = document.getElementById('box');

observer.observe(box);

Now the callback that changes the color will only be executed when every single pixel of the element is visible in the viewport.

The element changes color once every single pixel of it enters the viewport.

threshold also accepts multiple values in an array, which makes the callback run each time the element passes one of the threshold values set.

For example:

index.js

const threshold = document.getElementById('threshold');

const observer = new IntersectionObserver(
  (entries) => {
    for (const entry of entries) {
      const ratio = entry.intersectionRatio;
      threshold.innerText = `${Math.round(ratio * 100)}%`;
    }
  },
  // 👇 Multiple threshold values
  { threshold: [0, 0.25, 0.5, 0.75, 1] }
);

const box = document.getElementById('box');

observer.observe(box);

We pass 5 threshold values (0, 0.25, 0.5, 0.75, and 1) in an array to the threshold property and display the element’s visibility each time it crosses one of them. To do this we use the intersectionRatio property, a number between 0 and 1 indicating the fraction of the element that is currently in the viewport.

The text is updated each time a percentage of the element in the viewport reaches a certain threshold.

Notice how the text shown doesn’t always match our thresholds, e.g., 2% was shown for the 0% threshold in the demo. This happens because when we scroll fast and reach a threshold, by the time the callback fires and updates the text, we have already scrolled more of the element into view, past the threshold.

If we scrolled more slowly, the callback would have time to update the text before the element scrolls past the current threshold.

rootMargin

The rootMargin property applies a margin around the viewport or root element. It accepts a string in the same shorthand syntax as the CSS margin property, e.g., 10px 20px 30px 40px (top, right, bottom, left), though only px and percentage values are allowed. The margin grows or shrinks the region of the viewport that the Intersection Observer watches for an intersection with the target element.

Here’s an example of using the rootMargin property:

index.js

const observer = new IntersectionObserver(
  (entries) => {
    for (const entry of entries) {
      const intersecting = entry.isIntersecting;
      entry.target.style.backgroundColor = intersecting ? 'blue' : 'black';
    }
  },
  // 👇 Grow the watched area by 50px on every side of the viewport
  { rootMargin: '50px' }
);

const box = document.getElementById('box');

observer.observe(box);

After setting a rootMargin of 50px, the viewport is effectively expanded by 50px on every side for intersection purposes, and the callback function will be invoked when the element comes within 50px of the viewport.

The red lines in the demo indicate the bounds of the region watched by the Observer for any intersection.

The element changes color when it comes within 50px of the viewport.

We can also specify negative margins to shrink the area of the viewport used for the intersection.

const observer = new IntersectionObserver(
  (entries) => {
    for (const entry of entries) {
      const intersecting = entry.isIntersecting;
      entry.target.style.backgroundColor = intersecting ? 'blue' : 'black';
    }
  },
  // 👇 Negative margin
  { rootMargin: '-50px' }
);

const box = document.getElementById('box');

observer.observe(box);

Now the callback is fired when a single pixel of the element is more than 50px inside the viewport.

The element changes color when a single pixel of the element is more than 50px inside the viewport.

root

The root property accepts an element that must be an ancestor of the element being observed. By default, it is null, which means the viewport is used. You won’t need to use this property often, but it is handy when you have a scrollable container on your page that you want to check for intersections with one of its child elements.

For instance, to create the demos in this article, I set the root property to a scrollable container on the page, to make it easy for you to visualize the viewport and the areas outside it and gain a better understanding of how the intersection works.
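
A setup along those lines might look like the following sketch; the scroll-container and box element ids here are hypothetical, used only for illustration:

// Hypothetical scrollable container that acts as the "viewport"
const container = document.getElementById('scroll-container');

const observer = new IntersectionObserver(
  (entries) => {
    for (const entry of entries) {
      const intersecting = entry.isIntersecting;
      entry.target.style.backgroundColor = intersecting ? 'blue' : 'black';
    }
  },
  // 👇 Intersections are now measured against the container, not the browser viewport
  { root: container }
);

observer.observe(document.getElementById('box'));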

Second callback parameter

The callback passed to the IntersectionObserver() constructor actually has two parameters. The first parameter is the entries parameter we looked at earlier. The second is simply the Observer that is watching for intersections.

const observer = new IntersectionObserver((entries, o) => {
  console.log(o === observer); // true
});

This parameter is useful for accessing the Observer from within the callback, especially when the callback is defined somewhere the observer variable isn’t in scope, e.g., in a different file from the one that creates the Observer.
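
For example, the callback could live in its own module and still reach the Observer through its second parameter. The file names, the visible class, and the box id below are hypothetical, for illustration only:

callbacks.js

// The observer created in index.js isn't in scope in this module,
// so we rely on the callback's second parameter instead.
export function onIntersect(entries, observer) {
  for (const entry of entries) {
    if (entry.isIntersecting) {
      entry.target.classList.add('visible'); // hypothetical CSS class
      observer.unobserve(entry.target); // the Observer is still reachable here
    }
  }
}

index.js

import { onIntersect } from './callbacks.js';

const observer = new IntersectionObserver(onIntersect);
observer.observe(document.getElementById('box'));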

Preventing memory leaks

We need to stop observing elements when they no longer need to be observed, like when they are removed from the DOM or after one-time scroll-based animations, to prevent memory leaks or performance issues.

We can do this with the unobserve() method.

const animObserver = new IntersectionObserver((entries, observer) => {
  entries.forEach((entry) => {
    if (entry.isIntersecting) {
      doAnim(entry.target); // placeholder for a one-time animation
      // Stop watching this element once its animation has run
      observer.unobserve(entry.target);
    }
  });
});

The unobserve() method takes a single element as its argument and stops observing that element.

There is also the disconnect() method, which makes the Observer stop observing all elements.
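
For instance, if a whole observed section were removed from the page, we might tear everything down at once. This is only a hypothetical cleanup sketch; the gallery element and removeGallery function aren’t part of the examples above:

// Hypothetical cleanup: stop observing all elements before removing
// the content they belong to
function removeGallery(gallery, observer) {
  observer.disconnect(); // the Observer stops observing every element
  gallery.remove();
}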

Conclusion

Intersection Observer is a powerful JavaScript API for easily detecting when an element has intersected with the viewport or a parent element. It lets us implement lazy loading, scroll-based animations, infinite scrolling, and more, without causing performance issues or resorting to complicated logic.