featured

7 amazing new JavaScript features in ES14 (ES2023)

The world’s most popular programming language just got a new update.

Ever since 2015, a new JavaScript version has come out every year with tons of powerful features to make life much easier, and 2023 has been no different. Let’s look at what ES14 has to offer.

1. Array toSorted() method

Sweet syntactic sugar.

ES14 comes with a new toSorted() method that makes it easier to sort an array and return a copy without mutation.

So instead of doing this:

JavaScript
const nums = [5, 2, 6, 3, 1, 7, 4];

const clone = [...nums];
const sorted = clone.sort();

console.log(sorted); // [1, 2, 3, 4, 5, 6, 7]

We now get to do this:

JavaScript
const nums = [5, 2, 6, 3, 1, 7, 4];

const sorted = nums.toSorted();

console.log(sorted); // [1, 2, 3, 4, 5, 6, 7]
console.log(nums); // [5, 2, 6, 3, 1, 7, 4]

And just like sort(), toSorted() takes a callback function that lets you decide how the sort should happen – ascending or descending, alphabetical or numeric.

JavaScript
const nums = [5, 2, 6, 3, 1, 7, 4];

const sorted = nums.toSorted((a, b) => b - a);

console.log(sorted); // [7, 6, 5, 4, 3, 2, 1]

2. Hashbang grammar

What’s a hashbang?

Basically, it’s a sequence of characters at the start of a file that tells the shell which interpreter or engine it should use to run the file. For example:

codingbeauty.sh
#!/bin/bash
echo "How are you all doing today?"

With this hashbang, we can now easily run codingbeauty.sh directly in the shell, after making the script executable with the chmod command (chmod +x codingbeauty.sh).

Hashbangs also let us hide the juicy implementation details in our scripts and pin a specific interpreter (or interpreter version) to execute the script files.

With ES14, hashbangs are now officially part of the JavaScript grammar, so we can do the same in JavaScript files:

codingbeauty.js
#!/usr/bin/env node
console.log("I'm doing great, how are you?");

3. Array toSpliced() method

Those immutability purists out there will no doubt be pleased with all these new Array methods.

toSorted() is to sort() as toSpliced() is to splice():

JavaScript
const colors = ['red', 'green', 'blue', 'yellow', 'pink'];

const spliced = colors.toSpliced(1, 2, 'gray', 'white');

console.log(spliced); // ['red', 'gray', 'white', 'yellow', 'pink']

// Original not modified
console.log(colors); // ['red', 'green', 'blue', 'yellow', 'pink']

Here toSpliced() removes 2 of the array elements, starting from index 1, and inserts 'gray' and 'white' in their place.

4. Symbols as WeakMap keys

WeakMaps; not very popular, are they?

Well, they’re a lot like Maps, except their keys can only be objects – no strings or numbers allowed here. These keys are stored as weak references, meaning the JavaScript engine is free to garbage-collect a key object when there is no other reference to it in memory apart from the key.

One powerful use of WeakMaps is custom caching: by using objects as keys, you can associate cached values with specific objects. When an object is garbage collected, the corresponding WeakMap entry becomes collectable along with it, automatically freeing that part of the cache.
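
Here’s a minimal sketch of that caching pattern (the getMetadata function and the cached value are just illustrative):

JavaScript
// A rough sketch of per-object caching with a WeakMap
const cache = new WeakMap();

function getMetadata(obj) {
  if (!cache.has(obj)) {
    // Stand-in for an expensive computation
    cache.set(obj, { computedAt: Date.now() });
  }
  return cache.get(obj);
}

// Once an object becomes unreachable elsewhere, its cache entry
// becomes eligible for garbage collection too.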

JavaScript
const map = new Map();
const weakMap = new WeakMap();

let obj1 = { name: 'React' };
let obj2 = { name: 'Angular' };

map.set(obj1, 'Value for obj1 at Coding Beauty');
weakMap.set(obj2, 'Value for obj2 at Coding Beauty');

console.log(map.get(obj1)); // Value for obj1 at Coding Beauty
console.log(weakMap.get(obj2)); // Value for obj2 at Coding Beauty

obj1 = null;
obj2 = null;

// The Map still holds a strong reference to { name: 'React' },
// so that object can never be garbage collected.
// The WeakMap only holds a weak reference to { name: 'Angular' },
// so the engine is free to garbage-collect it now that nothing
// else references it.

So ES14 makes it possible to use unique (non-registered) JavaScript Symbols as keys. This can make the role a key-value pair plays in a WeakMap clearer.

JavaScript
let mySymbol = Symbol('mySymbol');
let myWeakMap = new WeakMap();
let obj = { name: 'Coding Beauty' };

myWeakMap.set(mySymbol, obj);

console.log(myWeakMap.get(mySymbol)); // { name: 'Coding Beauty' }


5. Array toReversed() method

Another new Array method to promote immutability and functional programming.

The name’s self-explanatory: give me the reversed version of my array.

Before – with reverse().

JavaScript
const arr = [5, 4, 3, 2, 1];
const reversed = arr.reverse();

console.log(reversed); // [1, 2, 3, 4, 5]

// Original modified
console.log(arr); // [1, 2, 3, 4, 5]

Now:

JavaScript
const arr = [5, 4, 3, 2, 1];
const reversed = arr.toReversed();

console.log(reversed); // [1, 2, 3, 4, 5]

// Original NOT modified
console.log(arr); // [5, 4, 3, 2, 1]

6. Array find from last

Sure, we can already use the Array find() method to find an element in an array that passes a specified test condition. And findIndex() will give us the index of such an element.

But find() and findIndex() both start searching from the first element of the array. What if it would be better for us to search from the last element instead?

Here we’re trying to get the item in the array with the value prop equal to y. With find() and findIndex():

JavaScript
const letters = [
  { value: 'v' },
  { value: 'w' },
  { value: 'x' },
  { value: 'y' },
  { value: 'z' },
];

const found = letters.find((item) => item.value === 'y');
const foundIndex = letters.findIndex((item) => item.value === 'y');

console.log(found); // { value: 'y' }
console.log(foundIndex); // 3

This works, but as the target object is closer to the tail of the array, we could make this program run faster if we use the new ES2023 findLast() and findLastIndex() methods to search the array from the end.

JavaScript
const letters = [
  { value: 'v' },
  { value: 'w' },
  { value: 'x' },
  { value: 'y' },
  { value: 'z' },
];

const found = letters.findLast((item) => item.value === 'y');
const foundIndex = letters.findLastIndex((item) => item.value === 'y');

console.log(found); // { value: 'y' }
console.log(foundIndex); // 3

Another use case might require that we specifically search the array from the end to get the correct item. For example, if we want to find the last even number in a list of numbers, find() and findIndex() would produce a totally wrong result.

JavaScript
const nums = [7, 14, 3, 8, 10, 9];

// gives 14, instead of 10
const lastEven = nums.find((value) => value % 2 === 0);

// gives 1, instead of 4
const lastEvenIndex = nums.findIndex((value) => value % 2 === 0);

console.log(lastEven); // 14
console.log(lastEvenIndex); // 1

Yes, we could call the reverse() method on the array to reverse the order of the elements before calling find() and findIndex().

But this approach would cause unnecessary mutation of the array, as reverse() reverses the elements of an array in place. The only way to avoid this mutation would be to make a new copy of the entire array, which could cause performance problems for large arrays.

And that’s beside the fact that findIndex() would still not work on the reversed array, as reversing the elements would also mean changing the indexes they had in the original array. To get the original index, we would need to perform an additional calculation, which means writing more code.

JavaScript
const nums = [7, 14, 3, 8, 10, 9];

// Copying the entire array with the spread syntax before
// calling reverse()
const reversed = [...nums].reverse();

// correctly gives 10
const lastEven = reversed.find((value) => value % 2 === 0);

// gives 1, instead of 4
const reversedIndex = reversed.findIndex((value) => value % 2 === 0);

// Need to re-calculate to get original index
const lastEvenIndex = reversed.length - 1 - reversedIndex;

console.log(lastEven); // 10
console.log(reversedIndex); // 1
console.log(lastEvenIndex); // 4

It’s in cases like this that the findLast() and findLastIndex() methods come in handy.

JavaScript
const nums = [7, 14, 3, 8, 10, 9];

const lastEven = nums.findLast((num) => num % 2 === 0);
const lastEvenIndex = nums.findLastIndex((num) => num % 2 === 0);

console.log(lastEven); // 10
console.log(lastEvenIndex); // 4

This code is shorter and more readable. Most importantly, it produces the correct result.

7. Array with() method

Unlike the others, with() has no mutating method counterpart – its mutating equivalent is plain bracket assignment. But once you see it in action, you’ll recognize it as the immutable approach to changing a single element.

In so many languages, you typically modify a single array element like this:

JavaScript
const arr = [5, 4, 7, 2, 1];

// Mutates array to change element
arr[2] = 3;

console.log(arr); // [5, 4, 3, 2, 1]

But see what we can do now, in ES2023 JavaScript:

JavaScript
const arr = [5, 4, 7, 2, 1];
const replaced = arr.with(2, 3);

console.log(replaced); // [5, 4, 3, 2, 1]

// Original not modified
console.log(arr); // [5, 4, 7, 2, 1]

Final thoughts

This year was all about easier functional programming and immutability.

Indeed, the reliability and consistency that immutability brings cannot be overstated. With the rise of declarative frameworks like React and Vue as well as Redux and other libraries, we’ve seen immutable JavaScript array methods explode in popularity; it’s only natural that we see more and more of them come baked into the language as it matures.

They stole the show in 2023, just like JavaScript took over a huge chunk of the language ecosystem a while back, with many millions of developers keeping that fire burning today. Let’s see what the future holds.

One-liners: far more than just one line

Here’s one:

JavaScript
const groupBy = (arr, groupFn) =>
  arr.reduce(
    (grouped, obj) => ({
      ...grouped,
      [groupFn(obj)]: [...(grouped[groupFn(obj)] || []), obj],
    }),
    {}
  );

and another:

JavaScript
const randomHexColor = () =>
  `#${Math.random().toString(16).slice(2, 8).padEnd(6, '0')}`;

console.log(randomHexColor()); // #7a10ba (varies)
console.log(randomHexColor()); // #65abdc (varies)

Indeed, they are impressive when done right; a nice way to show off language mastery.

But what exactly is a one-liner? Is it really code that takes only one line? If so, then can’t every piece of code qualify as a one-liner, if we just remove all the newline characters? Think about it.

A minified JavaScript file with only one line of code.

It seems like we need a more rigorous definition of what qualifies as a one-liner. And, after a few minutes of thought when writing a previous article on one-liners, I came up with this:

A one-liner is a code solution to a problem, implemented with a single statement in a particular programming language, optionally using only first-party utilities.

Tari Ibaba (😎)

You can clearly see the particular keywords that set this definition apart from others:

1. “…single statement…”

Single line or single statement? I go with the latter.

Because the thing is, we could squeeze every program ever made into a single line of code if we wanted to; all the whitespace and file separation is only for us and our fellow developers.

If you’ve used Uglify.js or a similar minifier, you know what it does to all those pitiful lines of code; that’s why it’s called Uglify.

Uglify changes this:

JavaScript
/**
 * Obviously redundant comments here.
 * Just meant to emphasize what Uglify does.
 * Class Person
 */
class Person {
  /** Constructor */
  constructor(name, age) {
    this.name = name;
    this.age = age;
  }

  /** Print Message */
  printMessage() {
    console.log(
      `Hello! My name is ${this.name} and I am ${this.age} years old.`
    );
  }
}

/** Creating Object */
var person = new Person('John Doe', 25);

/** Printing Message */
person.printMessage();

to this:

JavaScript
class Person{constructor(e,n){this.name=e,this.age=n}printMessage(){console.log(`Hello! My name is ${this.name} and I am ${this.age} years old.`)}}var person=new Person("John Doe",25);person.printMessage();

Would you be impressed by someone who actually wrote code in this minified way? I’d say it’s just badly formatted code.

Would you call this a one-liner?

JavaScript
const sum = (a, b) => {
  const s1 = a * a;
  const s2 = b * b;
  return s1 + s2;
};

Tools like the VS Code Prettier extension will easily split up those 3 statements into multiple lines:

The three statements are separated after VS Code Prettier auto-format on save.

A true one-liner way to get the sum of two squares would be something like this:

JavaScript
const sum = (a, b) => a * a + b * b;

One short, succinct statement does the same job with equal clarity.

On the other hand, what about code that spans multiple lines but only uses one statement? Like this:

JavaScript
const capitalizeWithoutSpaces = (str) =>
  str
    .split('')
    .filter((char) => char.trim())
    .map((char) => char.toUpperCase())
    .join('');

I would say this function’s body is far more qualified to be a one-liner than the two examples we saw above; it’s a single statement.

2. “…particular programming language”

We need this part because of abstraction.

int sum(int a, int b) { return a * a + b * b; }

This is a one-liner, isn’t it? Very harmless-looking and easy to understand.

How about now? :

sum(int, int):
        push    rbp
        mov     rbp, rsp
        mov     DWORD PTR [rbp-4], edi
        mov     DWORD PTR [rbp-8], esi
        mov     eax, DWORD PTR [rbp-4]
        imul    eax, eax
        mov     edx, eax
        mov     eax, DWORD PTR [rbp-8]
        imul    eax, eax
        add     eax, edx
        pop     rbp
        ret

Wait, this is just another piece of code, right? Well, it is – except the two do exactly the same thing, just in different languages: one in C++, the other in assembly.

Now imagine how many more lines an equivalent machine-language program would have. Clearly, we can only call our sum() a one-liner in the context of C++.

3. “…using only first-party utilities”

Once again, we need this part because of abstraction.

For us to consider a piece of code as a one-liner, it should only use built-in functions and methods that are part of the language’s standard library or core functionality. For example: array methods, the http module in Node.js, the os module in Python, and so on.

Without this, capitalizeWithoutSpaces() below would easily pass as a JavaScript one-liner:

JavaScript
// Not a one-liner
const capitalizeWithoutSpaces = (str) =>
  filter(str.split(''), (char) => char.trim())
    .map((char) => char.toUpperCase())
    .join('');

function filter(arr, callback) {
  // Look at all these lines
  const result = [];
  for (const item of arr) {
    if (callback(item)) {
      result.push(item);
    }
  }
  return result;
}

filter could have contained many thousands of lines, yet capitalizeWithoutSpaces would still be given one-liner status.

It’s kind of controversial because a lot of these so-called first-party utilities are abstractions themselves, with logic spanning dozens of lines. But just like the “single statement” specifier, it keeps arbitrary code from passing itself off as a one-liner.

Final thoughts

The essence of a one-liner in programming extends beyond the literal interpretation of its name. It lies not only in the minimalism of physical lines but also in the elegance and sophistication of its execution. It often requires a sound comprehension of the language at hand, an ability to concisely solve a problem, and the art of utilizing the language’s core functionalities with finesse.

A one-liner isn’t merely about squeezing code into a single line; it is where the clarity of thought, the elegance of language mastery, and the succinctness of execution converge. It’s the realm where brevity meets brilliance and the art of coding truly shines.

Mojo: 7 brilliant Python upgrades in the new AI language

It is 35,000 times faster than Python. It is quicker than C. It is as easy as Python.

Enter Mojo: a newly released programming language made for AI developers and made by Modular, a company founded by Chris Lattner, the original creator of Swift.

This 35000x claim came from a benchmark comparison between Mojo and other languages, using the Mandelbrot algorithm on a particular AWS instance.

It’s a superset of Python, combining Python’s usability, simplicity, and versatility with C’s incredible performance.

If you’re passionate about AI and already have a grasp on Python, then Mojo is definitely worth a try. So, let’s dive in and explore 7 powerful features of this exciting language together.

Mojo’s features

I signed up for Mojo access shortly after it was announced and got access a few days later.

I got access to the Mojo playground.

I started exploring all the cool new features they had to offer and even had the chance to run some code and see the language in action. Here are 7 interesting Python upgrades I found:

1. let and var declarations

Mojo introduces new let and var statements that let us create variables.

If we like, we can specify a type like Int or String for the variable, as we do in TypeScript. var allows variables to change; let doesn’t. So it’s not like JavaScript’s let and var – there’s no hoisting for var, and let is constant.

Mojo
def your_function(a, b):
    let c = a

    # Uncomment to see an error:
    # c = b  # error: c is immutable

    if c != b:
        let d = b
        print(d)

your_function(2, 3)

2. structs for faster abstraction

We have them in C++, Go, and more.

Structs are a Mojo feature similar to Python classes, but they’re different because Mojo structs are static: you can’t add more methods at runtime. This is a trade-off – less flexible, but faster.

Mojo
struct MyPair:
    var first: Int
    var second: Int

    # We use 'fn' instead of 'def' here - we'll explain that soon
    fn __init__(inout self, first: Int, second: Int):
        self.first = first
        self.second = second

    fn __lt__(self, rhs: MyPair) -> Bool:
        return self.first < rhs.first or (self.first == rhs.first and self.second < rhs.second)

Here’s one way struct is stricter than class: all fields must be explicitly defined:

Fields must be explicitly defined in Mojo structs.

3. Strong type checking

These structs don’t just make things faster; they let Mojo check variable types at compile time, like the TypeScript compiler does.

Mojo
def pairTest() -> Bool:
    let p = MyPair(1, 2)

    # Uncomment to see an error:
    # return p < 4  # gives a compile-time error

    return True

The 4 is an Int, the p is a MyPair; Mojo simply can’t allow this comparison.

4. Method overloading

C++, Java, Swift, etc. have these.

Function overloading is when there are multiple functions with the same name that accept parameters with different data types.

Look at this:

Mojo
struct Complex:
    var re: F32
    var im: F32

    fn __init__(inout self, x: F32):
        """Makes a complex number from a real number."""
        self.re = x
        self.im = 0.0

    fn __init__(inout self, r: F32, i: F32):
        """Makes a complex number from its real and imaginary parts."""
        self.re = r
        self.im = i

Dynamically typed languages like JavaScript and Python can’t have function overloads like this, since their parameters carry no declared types.

Overloading is allowed in module/file functions and methods based on parameter types, but it won’t work based on return type alone, and your function arguments need to have types. If you don’t give them types, overloading won’t work; all that will happen is that the most recently defined function overwrites the previously defined functions with the same name.

5. Easy integration with Python modules

Having seamless Python support is Mojo’s biggest selling point by far.

And using Python modules in Mojo is straightforward. All you need to do is call the Python.import_module() method with the module name.

Here I’m importing numpy, one of the most popular Python libraries in the world.

Mojo
from PythonInterface import Python

# Think of this as `import numpy as np` in Python
let np = Python.import_module("numpy")

# Now it's like you're using numpy in Python
array = np.array([1, 2, 3])
print(array)

You can do the same for any Python module; the one limitation is that you have to import the whole module to access individual members.

Don’t expect your imported Python modules themselves to run 35,000 times faster, though – they still run through the CPython interpreter; the headline speedups apply to code written in Mojo itself.

6. fn definitions

fn is basically def with stricter rules.

def is flexible, mutable, Python-friendly; fn is constant, stable, and Python-enriching. It’s like JavaScript’s strict mode, but applied at the level of a single function.

Mojo
struct MyPair:
    var first: Int
    var second: Int

    fn __init__(inout self, first: Int, second: Int):
        self.first = first
        self.second = second

fn's rules:

  • Immutable arguments: Arguments are immutable by default – including self – so you can’t mistakenly mutate them.
  • Required argument types: You have to specify types for its arguments.
  • Required variable declarations: You must declare local variables in the fn before using them (with let and var of course).
  • Explicit exception declaration: If the fn throws exceptions, you must explicitly indicate so – like we do in Java with the throws keyword.

7. Mutable and immutable function arguments

Pass-by-value vs pass-by-reference.

You may have come across this concept in languages like C++.

Python’s def function uses pass-by-reference, just like in JavaScript; you can mutate objects passed as arguments inside the def. But Mojo’s def uses pass-by-value, so what you get inside a def is a copy of the passed object. So you can mutate that copy all you want; the changes won’t affect the main object.
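
For example, here’s the familiar JavaScript (and Python def) behavior described above – an object argument is a reference, so the function can mutate the caller’s object. The renameTo function and config object are just illustrative names:

JavaScript
function renameTo(config, name) {
  config.name = name; // mutates the caller's object
}

const config = { name: 'old' };
renameTo(config, 'new');

console.log(config.name); // 'new' – the original object was changed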

Pass-by-reference improves memory efficiency as we don’t have to make a copy of the object for the function.

But what about the new fn function? Like Python’s def, it uses pass-by-reference by default, but a key difference is that those references are immutable. So we can read the original object in the function, but we can’t mutate it.

Immutable arguments

borrowed: a fresh, new, seemingly redundant keyword in Mojo.

Because what borrowed does is make arguments in a Mojo fn function immutable – which they are by default. This is invaluable when dealing with objects that take up a substantial amount of memory, or when we’re not allowed to make a copy of the object we’re passing.

For example:

Mojo
fn use_something_big(borrowed a: SomethingBig, b: SomethingBig):
    """'a' and 'b' are both immutable, because 'borrowed' is the default."""
    a.print_id()  # 10
    b.print_id()  # 20

let a = SomethingBig(10)
let b = SomethingBig(20)
use_something_big(a, b)

Instead of making a copy of the huge SomethingBig object in the fn function, we simply pass a reference as an immutable argument.

Mutable arguments

If we want mutable arguments, we use the new inout keyword instead:

Mojo
struct Car:
    var id_number: Int
    var color: String

    fn __init__(inout self, id: Int):
        self.id_number = id
        self.color = 'none'

    # self is passed by-reference for mutation as described above.
    fn set_color(inout self, color: String):
        self.color = color

    # Arguments like self are passed as borrowed by default.
    fn print_id(self):  # Same as: fn print_id(borrowed self):
        print('Id: {0}, color: {1}')

car = Car(11)
car.set_color('red')  # No error

self is immutable in fn functions by default, so here we needed inout to modify the color field in set_color.

Key takeaways

  • Mojo: a new AI programming language with the speed of C and the simplicity of Python.
  • let and var declarations: Mojo introduces let and var statements for creating optionally typed variables. var variables are mutable, let variables are not.
  • Structs: Mojo features static structs, similar to Python classes but faster due to their static nature.
  • Strong type checking: Mojo supports compile-time type checking, akin to TypeScript.
  • Method overloading: Mojo allows function overloading, where functions with the same name can accept different data types.
  • Python module integration: Mojo offers seamless Python support, running Python modules significantly faster.
  • fn definitions: The fn keyword in Mojo is a stricter version of Python’s def, requiring immutable arguments and explicit exception declaration.
  • Mutable and immutable arguments: Mojo introduces mutable (inout) and immutable (borrowed) function arguments.

Final thoughts

As we witness the unveiling of Mojo, it’s intriguing to think how this new AI-focused language might revolutionize the programming realm. Bridging the performance gap with the ease-of-use Python offers, and introducing powerful features like strong type checking, might herald a new era in AI development. Let’s embrace this shift with curiosity and eagerness to exploit the full potential of Mojo.

Why “Yarn 2” is actually Yarn 3

What do we know as Yarn 2?

It’s the modern version of Yarn that comes with important upgrades to the package manager, including pnpm-style symlinks and an innovative new Plug’n’Play module installation method for much-reduced project sizes and rapid installations.

But after migrating from Yarn 1, you’ll find something interesting, as I did – the thing widely known as Yarn 2 is actually… version 3?

After migrating to "Yarn 2" and checking the version, it was shown to be version 3.

Why is “Yarn 2” using version 3?

It’s because Yarn 1 was the original codebase, which was completely overhauled for Yarn v2.0 (the actual version 2), launched in January 2020. Later, a fresh major version, Yarn v3.0, arrived – thankfully without another codebase rewrite. The next major update will be Yarn v4.0, and so on.

Even though only a few major versions have been released so far, a growing number of people took to labelling everything that used the new codebase as “Yarn 2” – the 2.x versions as well as later ones such as 3.x. This is a misinterpretation, though: “Yarn 2” strictly refers to the 2.x versions. A more accurate way to refer to the new codebase is “Yarn 2+” or “Yarn Berry”, the codename the team picked for the new codebase when they started developing it.

As once stated by one of the maintainers in a related GitHub discussion:

Some people have started to colloquially call “Yarn 2” everything using this new codebase, so Yarn 2.x and beyond (including 3.x). This is incorrect though (“Yarn 2” is really just 2.x), and a better term to refer to the new codebase would be Yarn 2+, or Yarn Berry (which is the codename I picked for the new codebase when I started working on it).

arcanis, a Yarn maintainer

How to migrate from Yarn v1 to Yarn Berry

A Yarn Berry installation in progress.

If you’re still using Yarn version 1 – or worse, NPM – you’re missing out.

The new Yarn is loaded with a sizable number of upgrades that will significantly improve your developer experience when you start using it. These range from notable improvements in stability, flexibility, and extensibility, to brand new features, like Constraints.

You can migrate from Yarn v1 to Yarn Berry in 7 easy steps:

  1. Make sure you’re using Node version 18+.
  2. Run corepack enable to activate Corepack.
  3. Navigate to your project directory.
  4. Run yarn set version berry.
  5. Convert your .npmrc and .yarnrc files into .yarnrc.yml (as explained here).
  6. Run yarn install to migrate the lockfile.
  7. Commit all changes.

In case you experience any issues due to breaking changes, this official Yarn Berry migration guide should help.

Final thoughts

The Yarn versioning saga teaches us an important lesson: terminology matters.

What many of us dub as “Yarn 2” is actually “Yarn 2+” or “Yarn Berry”, the game-changing codebase. This misnomer emphasizes our need to stay current, not just with evolving tools and features, but with their rightful names as well. After all, how we understand and converse about these improvements shapes our effectiveness and fluency as developers.

Fine-tuning for OpenAI’s GPT-3.5 Turbo model is finally here

Some great news lately for AI developers from OpenAI.

Finally, you can now fine-tune the GPT-3.5 Turbo model using your own data. This gives you the ability to create customized versions of the OpenAI model that perform incredibly well at specific tasks and give responses in a customized format and tone, perfect for your use case.

For example, we can use fine-tuning to ensure that our model always responds in a JSON format, in Spanish, with a friendly, informal tone. Or we could make a model that only gives one out of a finite set of responses, e.g., rating customer reviews as critical, positive, or neutral, according to how *we* define these terms.
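
For context, each training example for chat-model fine-tuning is a short conversation in the messages format. Here’s a rough sketch of what one could look like, shown as a JavaScript object (the system prompt and replies are made up; the real training file is JSONL, with one such object per line):

JavaScript
const trainingExample = {
  messages: [
    { role: 'system', content: 'Respond in JSON, in Spanish, with a friendly, informal tone.' },
    { role: 'user', content: 'What are your opening hours?' },
    {
      role: 'assistant',
      content: '{ "respuesta": "¡Hola! Abrimos de 9 a 17, de lunes a viernes." }',
    },
  ],
};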

As stated by OpenAI, early testers have successfully used fine-tuning in various areas, such as being able to:

  • Make the model output results in a more consistent and reliable format.
  • Match a specific brand’s style and messaging.
  • Improve how well the model follows instructions.

The company also claims that fine-tuned GPT-3.5 Turbo models can match and even exceed the capabilities of base GPT-4 for certain tasks.

Before now, fine-tuning was only possible with weaker, costlier GPT-3 models, like davinci-002 and babbage-002. Providing custom data to a GPT-3.5 Turbo model was only possible with techniques like few-shot prompting and vector embeddings.

OpenAI also assures that any data used for fine-tuning any of their models belongs to the customer, and they don’t use it to train their models.

What is GPT-3.5 Turbo, anyway?

Launched earlier this year, GPT-3.5 Turbo is a model family that OpenAI says is ideal for applications beyond chat alone. It can handle 4,000 tokens at once, twice the capacity of the preceding model. The company also highlighted that early testers shortened their prompts by 90% after fine-tuning the GPT-3.5 Turbo model.

What can I use GPT-3.5 Turbo fine-tuning for?

  • Customer service automation: We can use a fine-tuned GPT model to make virtual customer service agents or chatbots that deliver responses in line with the brand’s tone and messaging.
  • Content generation: The model can be used for generating marketing content, blog posts, or social media posts. The fine-tuning would allow the model to generate content in a brand-specific style according to prompts given.
  • Code generation & auto-completion: In software development, such a model can provide developers with code suggestions and autocompletion to boost their productivity and get coding done faster.
  • Translation: We can use a fine-tuned GPT model for translation tasks, converting text from one language to another with greater precision. For example, the model can be tuned to follow specific grammatical and syntactical rules of different languages, which can lead to higher accuracy translations.
  • Text summarization: We can apply the model in summarizing lengthy texts such as articles, reports, or books. After fine-tuning, it can consistently output summaries that capture the key points and ideas without distorting the original meaning. This could be particularly useful for educational platforms, news services, or any scenario where digesting large amounts of information quickly is crucial.

How much will GPT-3.5 Turbo fine-tuning cost?

There’s the cost of fine-tuning and then the actual usage cost.

  • Training: $0.008 / 1K tokens
  • Usage input: $0.012 / 1K tokens
  • Usage output: $0.016 / 1K tokens

For example, a gpt-3.5-turbo fine-tuning job with a training file of 100,000 tokens that is trained for 3 epochs would have an expected cost of $2.40.

OpenAI, GPT 3.5 Turbo fine-tuning and API updates
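
(That works out as 100,000 tokens × 3 epochs = 300,000 billed training tokens, and 300,000 × $0.008 per 1K tokens = $2.40.)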

When will fine-tuning for GPT-4 be available?

This fall.

OpenAI has announced that support for fine-tuning GPT-4, its most recent large language model, is expected to arrive later this year, likely during the fall. This upgraded model has been shown to perform at a human level across diverse professional and academic benchmarks. It surpasses GPT-3.5 in terms of reliability, creativity, and its capacity to deal with instructions that are more nuanced.

10 powerful JavaScript animation libraries for engaging user experiences

Animations. A fantastic way to stand out from the crowd and grab the attention of your visitors.

With creative object motion and fluid page transitions, you not only add a unique aesthetic appeal to your website but also enhance user engagement and create a memorable first impression.

And creating animations couldn’t be easier with these 10 powerful JavaScript libraries. Scroll animations, handwriting animations, SPA page transitions, typing animations, color animations, SVG animations… they are endlessly capable. They are the best.

1. Anime.js

An animation created with Anime.js.

With over 43k stars on GitHub, Anime.js is easily one of the most popular animation libraries out there.

It’s a lightweight JavaScript animation library with a simple API that can be used to animate CSS properties, SVG, DOM attributes, and JavaScript objects. With Anime.js, you can play, pause, restart or reverse an animation. The library also provides staggering features for animating multiple elements with follow-through and overlapping actions. There are various animation-related events also included, which we can listen to using callbacks and Promises.
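
For a quick taste, a minimal sketch with Anime.js v3 might look something like this (the .box selector and values are just placeholders):

JavaScript
import anime from 'animejs';

// Slide and spin every element with the .box class
anime({
  targets: '.box',
  translateX: 250,
  rotate: '1turn',
  duration: 800,
  easing: 'easeInOutQuad',
  loop: true,
});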

Visit the Anime.js website

2. Lottie

An animation created with Lottie.

Lottie is a library that parses Adobe After Effects animations exported as JSON with the Bodymovin plugin and renders them natively on mobile and web applications. This eliminates the need to manually recreate the advanced animations created in After Effects by expert designers. The Web version alone has over 27k stars on GitHub.
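
With lottie-web, loading an exported animation is roughly this simple (the container element and animation.json path are placeholders):

JavaScript
import lottie from 'lottie-web';

lottie.loadAnimation({
  container: document.getElementById('animation'), // element to render into
  renderer: 'svg',
  loop: true,
  autoplay: true,
  path: 'animation.json', // JSON exported with the Bodymovin plugin
});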

Visit the Lottie website

3. Velocity

An animation created with Velocity.

With Velocity you create color animations, transforms, loops, easings, SVG animations, and more. It uses the same API as the $.animate() method from the jQuery library, and it can integrate with jQuery if it is available. The library provides fade, scroll, and slide effects. Besides being able to control the duration and delay of an animation, you can reverse it sometime after it has been completed, or stop it altogether when it is in progress. It has over 17k stars on GitHub and is a good alternative to Anime.js.
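
A minimal sketch of the Velocity call signature (element, properties, options), assuming Velocity 1.x – the selector and values here are placeholders:

JavaScript
import Velocity from 'velocity-animate';

const box = document.querySelector('.box');

// Fade and slide the element over 800ms
Velocity(box, { opacity: 0.5, translateY: '100px' }, { duration: 800, easing: 'ease-in-out' });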

Visit the Velocity website

4. Rough Notation

Some Rough Notation annotation styles.

Rough Notation is a JavaScript library for creating and animating colorful annotations on a web page. It uses RoughJS to create a hand-drawn look and feel. You can create several annotation styles, including underline, box, circle, highlight, strike-through, etc., and control the duration and color of each annotation style.

Visit the Rough Notation website

5. Popmotion

An animation created with Popmotion.

Popmotion is a functional library for creating prominent and attention-grabbing animations. What makes it stand out? It makes zero assumptions about the object properties you intend to animate, and instead provides simple, composable functions that can be used in any JavaScript environment.

The library supports keyframes, spring and inertia animations on numbers, colors, and complex strings. It is well-tested, actively maintained, and has over 19k stars on GitHub.

Visit the Popmotion website

6. Vivus

An animation created with Vivus.

Vivus is a JavaScript library that allows you to animate SVGs, giving them the appearance of being drawn. It is fast and lightweight with exactly zero dependencies, and provides three different ways to animate SVGs: Delayed, Sync, and OneByOne. You can also use a custom script to draw an SVG in your preferred way.

Vivus also allows you to customize the duration, delay, timing function, and other animation settings. Check out Vivus Instant for live, hands-on examples.

Visit the Vivus website

7. GreenSock Animation Platform (GSAP)

An animation created with GSAP

The GreenSock Animation Platform (GSAP) is a library that lets you create wonderful animations that work across all major browsers. You can use it in React, Vue, WebGL, and the HTML canvas to animate colors, strings, motion paths, and more. It also comes with a ScrollTrigger plugin that lets you create impressive scroll-based animations with little code.
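
For a sense of the API, a basic GSAP tween looks roughly like this (the .box selector and values are placeholders):

JavaScript
import { gsap } from 'gsap';

// Move and spin every .box element over 1 second
gsap.to('.box', { x: 200, rotation: 360, duration: 1, ease: 'power2.out' });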

Used on over 11 million sites, with over 15k stars on GitHub, it is versatile and popular indeed. You can use GSDevTools from GreenSock to easily debug animations created with GSAP.

Visit the GSAP website

8. Three.js

An animation created with Three.js.

Three.js is a lightweight library for displaying complex 3D objects and animations. It makes use of WebGL, SVG, and CSS3D renderers to create engaging three-dimensional experiences that work across a wide range of browsers and devices. It is a well-known library in the JavaScript community, with over 85k stars on GitHub.

Visit Three.js website

9. ScrollReveal

ScrollReveal animations.

The ScrollReveal library lets you easily animate a DOM element as it enters or leaves the browser viewport. It provides various types of elegant effects to reveal or hide an element on scroll in multiple browsers. And it’s quite easy to use too, with zero dependencies and over 21k stars on GitHub – a minimal sketch follows.
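
Here’s that hedged sketch (the selector and options are placeholders):

JavaScript
import ScrollReveal from 'scrollreveal';

// Reveal each .headline as it scrolls into view
ScrollReveal().reveal('.headline', { delay: 300, distance: '40px', origin: 'bottom' });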

Visit the ScrollReveal website

10. Barba.js

Page transitions created with Barba.js.

One creative way to make your website outstanding is to add lively transitions between the pages as your users navigate between them. This produces a better user experience than simply displaying the new webpage or reloading the browser.

And that’s why Barba.js is so useful; this library lets you create enjoyable page transitions by making the site run like a Single Page Application (SPA). It reduces the delay between pages and minimizes the number of HTTP requests that the browser makes. It’s gotten almost 11k stars on GitHub.

Visit Barba.js website

Bonus

11. Mo.js

An animation created with Mo.js.

Great library for creating compelling motion graphics.

It provides simple, declarative APIs for effortlessly creating smooth animations and effects that look great on devices of various screen sizes. You can move HTML or SVG DOM elements, or you can create a special Mo.js object, which comes with a set of unique capabilities. It is a reliable and well-tested library, with over 1500 tests written and over 17k stars on GitHub.

Visit the Mo.js website

12. Typed.js

An animation created with Typed.js

The name says it all; an animated typing library.

It types out a specific string character by character as if someone were typing in real time, letting you control the typing speed and even pause the typing for a specific amount of time. With smart backspacing, it types out successive strings starting with the same set of characters as the current one without backspacing the entire preceding string – as we saw in the demo above.

Also included is support for bulk typing, which types out a group of characters on the screen at the same time, instead of one after the other. Typed.js has over 12k stars on GitHub and is trusted by Slack and Envato.
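
A minimal sketch of Typed.js in action (the selector and strings are placeholders):

JavaScript
import Typed from 'typed.js';

new Typed('#element', {
  strings: ['First sentence.', 'First one, edited by smart backspacing.'],
  typeSpeed: 50,
  backSpeed: 30,
  loop: true,
});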

Visit the Typed.js website

Final thoughts

The world of web animation is vast and dynamic, constantly evolving with the advent of new technologies and libraries. The animation libraries highlighted in this article offer an array of features to create engaging, interactive, and visually appealing experiences for users. They are a testament to the power and flexibility of JavaScript, and demonstrate how animations greatly enhance the user experience.

As a developer, harnessing these tools will no doubt elevate your projects, making them stand out in an increasingly competitive digital landscape.

This new ES7 feature made my math 3 times easier

But 5 lines of Java is one line of Python.

How many times have you heard something like that from lovers of the latter?

Seems like they love to trash languages they stubbornly believe are verbose. I came to see that “Pythonic” is something truly cherished by our friends in the Python community.

Your Python code works, and so what? Where is elegance? Where is readability?

Think you can write a simple for loop and get away with it?

Python
total = 0
for i in range(1, 11):
    total += i

print("The sum of the first 10 numbers is:", total)

Just wait till one of them finds out – to say you’ll face severe criticism is an understatement.

Because apparently — and I kind of agree — it’s just not “beautiful” or concise enough.

To be “Pythonic” is best.

Python
total = sum(i for i in range(1, 11))
print("The sum of the first 10 numbers is:", total)

An ES7 feature that brings syntactic sugar and conciseness

The ** operator.

This one almost always comes up in Python’s favor when talking about language conciseness, up there with generators and the // operator.

The good news: JavaScript has had this feature since ES7 – over 6 years ago, in fact.

But it was surprising to learn that a sizeable number of our fellow JavaScripters never knew it was in the language.

It’s now effortless to get the power of a number, with the ** operator. Instead of Math.pow(a, b), you do a ** b.

JavaScript
const result = Math.pow(10, 2);
console.log(result); // 100

const result2 = Math.pow(2, Math.pow(3, 2));
console.log(result2);

const result3 = 10 ** 2;
console.log(result3); // 100

const result4 = 2 ** 3 ** 2;
console.log(result4); // 512

We don’t need a function for such a common math operation anymore.

You can even pass a decimal number as the exponent with ** – Math.pow() can do this too:

JavaScript
const result = Math.pow(49, 1.5);
console.log(result); // 343

const result2 = 49 ** 1.5;
console.log(result2); // 343

And it’s not only a drop-in replacement for Math.pow(); ** can take BigInts too:

JavaScript
// ❌ Error: Cannot convert a BigInt value to a number
const result1 = Math.pow(32n, 2);
console.log(result1);
JavaScript
const result2 = 32n ** 2n;
console.log(result2); // 1024n

BigInts let us represent numbers of any size without losing precision or experiencing overflow errors.

JavaScript
const veryLargeNumber = 1234567890123456789012345678901234567890n;

console.log(typeof veryLargeNumber); // "bigint"
console.log(veryLargeNumber * 2n); // 2469135780246913578024691357802469135780n

You can see that we simply add an n at the end of the digits to make it a BigInt.

Final thoughts

Language wars are a fun programmer pastime.

It’s always fun to debate about which programming language is more elegant and concise.

But at the end of the day, we’ve got to keep in mind that writing readable and maintainable code is what matters most.

In this article, we saw that the ** operator introduced in ES7 for JavaScript is a neat trick that can make your code more concise, and it even works with BigInts!

More features keep getting added every year – ES13 was released in 2022 – bringing more convenience and syntactic sugar.

So, keep exploring the possibilities of your favorite programming language, and have fun coding!

How to Simulate a Mouse Click in JavaScript

In this article, we’ll learn multiple ways to easily simulate a mouse click or tap on an element in the HTML DOM, using JavaScript.

Use click() method

This is the easiest and most basic method to simulate a mouse click on an element. Every HTMLElement has a click() method that can be used to simulate clicks on it. It fires the click event on the element when it is called. The click event bubbles up to elements higher in the document tree and fires their click events.

JavaScript
const target = document.querySelector('#target');
target.click();

To use the click() method, you first need to select the element in JavaScript, using a method like querySelector() or getElementById(). You’ll be able to access the HTMLElement object from the method’s return value, and call the click() method on it to simulate the click.

Let’s see an example, where we simulate a button click every second, by calling click() in setInterval().

JavaScript
const target = document.querySelector('#target');

let clickCount = 0;
const clickCountEl = document.getElementById('click-count');
clickCountEl.innerText = clickCount;

// Programmatically click button every 1 second
setInterval(() => {
  target.click();
  clickCount++;
  clickCountEl.innerText = clickCount;
}, 1000);
HTML
<button id="target">Target</button>
<br /><br />
Clicks: <span id="click-count"></span>

Result

The button is clicked programmatically every 1 second.

Notice how there are no visual indicators of a click occurring because it’s a programmatic click.

Since click() causes the click event to fire on the element, any click event listeners you attach to the element will be invoked from a click() call.

In the following example, we use a click event listener (instead of setInterval) to increase the click count, and another listener to toggle the button’s style when clicked.

JavaScript
const target = document.querySelector('#target');

let clickCount = 0;
const clickCountEl = document.getElementById('click-count');
clickCountEl.innerText = clickCount;

target.addEventListener('click', () => {
  clickCount++;
  clickCountEl.innerText = clickCount;
});

target.addEventListener('click', () => {
  target.classList.toggle('btn-style');
});

setInterval(() => {
  target.click();
}, 1000);
HTML
<button id="target">Target</button>
<br /><br />
Count: <span id="click-count"></span>
CSS
.btn-style {
  color: white;
  background-color: blue;
  border-radius: 2px;
}
The click count is incremented and the button’s style is toggled from the simulated click.

Simulate mouse click with MouseEvent object

Alternatively, we can use a custom MouseEvent object to simulate a mouse click in JavaScript.

JavaScript
const targetButton = document.getElementById('target');

const clickEvent = new MouseEvent('click');
targetButton.dispatchEvent(clickEvent);

The MouseEvent interface represents events that occur from the user interacting with a pointing device like the mouse. It can represent common events like click, dblclick, mouseup, and mousedown.

After selecting the HTMLElement and creating a MouseEvent, we call the dispatchEvent() method on the element to fire the event on the element.

For example:

JavaScript
const targetButton = document.getElementById('target');

let clickCount = 0;
const clickCountEl = document.getElementById('click-count');
clickCountEl.innerText = clickCount;

targetButton.addEventListener('click', () => {
  clickCount++;
  clickCountEl.innerText = clickCount;
});

setInterval(() => {
  const clickEvent = new MouseEvent('click');
  targetButton.dispatchEvent(clickEvent);
}, 1000);
HTML
<button id="target">Target</button>
<br /><br />
Clicks: <span id="click-count"></span>
The button is clicked programmatically every second with MouseEvent.

Any click event listener attached to the element is called with the MouseEvent object that was passed to dispatchEvent().

JavaScript
const clickEvent = new MouseEvent('click');
const targetButton = document.getElementById('target');

targetButton.addEventListener('click', (event) => {
  console.log(clickEvent === event); // true
});

targetButton.addEventListener('click', (event) => {
  console.log(clickEvent === event); // true
});

targetButton.dispatchEvent(clickEvent);

Result

The dispatched MouseEvent is passed to click listeners.

Apart from the event type passed as the first argument, we can pass options to the MouseEvent() constructor to control specific details about the event:

JavaScript
const targetButton = document.getElementById('target');

const clickEvent = new MouseEvent('click', {
  bubbles: true,
  cancelable: false,
  view: window,
});

targetButton.dispatchEvent(clickEvent);

The bubbles property determines whether the event can bubble up the DOM tree to the element’s containing elements.

The cancelable property determines whether or not the event’s default action can be prevented.

The view property sets the event’s AbstractView. You should pass the window object here.

Find more options in the MDN documentation for the MouseEvent() constructor and the now deprecated MouseEvent.initMouseEvent() method.

Simulate mouse click at position

We can also simulate a mouse click on an element at a specific position on the visible part of the webpage.

We do this by selecting the element with the document.elementFromPoint() method, and then simulating the click with the click() method.

JavaScript
const targetButton = document.elementFromPoint(x, y);
targetButton.click();

The elementFromPoint() method returns the topmost element at the specified coordinates, relative to the browser’s viewport. It returns an HTMLElement, which we call click() on to simulate the mouse click from JavaScript.

In the following example, we have a button and a containing div:

HTML
<div id="container">
  <button id="target">Target</button>
  <br /><br />
  Count: <span id="click-count"></span>
</div>

Using CSS, we position the #container div to make its top-left corner exactly at the position (200, 100) in the viewport.

CSS
#container {
  position: absolute;
  top: 100px;
  left: 200px;
  height: 100px;
  width: 100px;
  border: 1px solid black;
}

The #target button is at point (0, 0) in its #container, so point (200, 100) in the viewport.

To get the position of the #target with more certainty, we add an offset of 10px each to get the final position to search for: (210, 110).

JavaScript
let clickCount = 0;
const clickCountEl = document.getElementById('click-count');
clickCountEl.innerText = clickCount;

const targetButton = document.getElementById('target');
targetButton.addEventListener('click', () => {
  clickCount++;
  clickCountEl.innerText = clickCount;
});

// Offset of 10px to get button's position in container
const x = 200 + 10;
const y = 100 + 10;

setInterval(() => {
  const buttonAtPoint = document.elementFromPoint(x, y);
  buttonAtPoint.click();
}, 1000);

To verify that we’ve actually selected the button by its position and that it’s getting clicked, we also select it by its ID (target) and set a click event listener on the returned HTMLElement, which increases the count of how many times it has been clicked.

And we’re successful:

A click is simulated on the button at point (210, 110).

Simulate mouse click at position with MouseEvent object

We can also simulate a mouse click on an element at a certain (x, y) position using a MouseEvent object and the dispatchEvent() method.

Here’s how:

JavaScript
const clickEvent = new MouseEvent('click', {
  view: window,
  screenX: x,
  screenY: y,
});

const elementAtPoint = document.elementFromPoint(x, y);
elementAtPoint.dispatchEvent(clickEvent);

This time we specify the screenX and screenY options when creating the MouseEvent.

screenX sets the position where the click event should occur on the x-axis. It sets the value of the MouseEvent's screenX property.

screenY sets the position where the click event should occur on the y-axis. It sets the value of the MouseEvent's screenY property.

So we can use MouseEvent to replace the click() method in our last example:

JavaScript
// ...

// Offset of 10px to get button's position in container
const x = 200 + 10;
const y = 100 + 10;

setInterval(() => {
  const buttonAtPoint = document.elementFromPoint(x, y);

  const clickEvent = new MouseEvent('click', {
    view: window,
    screenX: x,
    screenY: y,
  });

  buttonAtPoint.dispatchEvent(clickEvent);
}, 1000);

Note: screenX and screenY don’t decide where the click occurs (elementFromPoint does). These options are set so that they can be accessed from any click event listener attached to the element at that point.

3 Common Mistakes to Avoid When Handling Events in React

In React apps, event listeners or observers perform certain actions when specific events occur. While it’s quite easy to create event listeners in React, there are common pitfalls you need to avoid to prevent confusing errors. These mistakes are made most often by beginners, but it’s not rare for them to be the reason for one of your debugging sessions as a reasonably experienced developer.

In this article, we’ll be exploring some of these common mistakes, and what you should do instead.

1. Accessing state variables without dealing with updates

Take a look at this simple React app. It’s essentially a basic stopwatch app, counting up indefinitely from zero.

JavaScript
import { useState, useEffect } from 'react';

export default function App() {
  const [time, setTime] = useState(0);

  useEffect(() => {
    const timer = setInterval(() => {
      setTime(time + 1);
    }, 1000);

    return () => {
      window.clearInterval(timer);
    };
  }, []);

  return (
    <div>
      Seconds: {time}
    </div>
  );
}

However, when we run this app, the results are not what we’d expect:

The seconds count is stuck at 1.

This happens because the time state variable referred to by the setInterval() callback (a closure) is the stale value from the render in which the closure was defined.

The closure can only access the time variable from the first render (which had a value of 0); it can’t see the new time value in subsequent renders. A JavaScript closure remembers the variables from the place where it was defined.

The issue is also due to the fact that the setInterval() closure is defined only once in the component.

The time variable from the first render will always have a value of 0, as React doesn’t mutate a state variable directly when setState is called, but instead creates a new variable containing the new state. So when the setInterval closure is called, it only ever updates the state to 1.

Here are some ways to avoid this mistake and prevent unexpected problems.

1. Pass function to setState

One way to avoid this error is by passing a callback to the state updater function (setState) instead of passing a value directly. React will ensure that the callback always receives the most recent state, avoiding the need to access state variables that might contain old state. It will set the state to the value the callback returns.

Here’s how we apply this for our example:

JavaScript
import { useState, useEffect } from 'react';

export default function App() {
  const [time, setTime] = useState(0);

  useEffect(() => {
    const timer = setInterval(() => {
      // 👇 Pass callback
      setTime((prevTime) => prevTime + 1);
    }, 1000);

    return () => {
      window.clearInterval(timer);
    };
  }, []);

  return (
    <div>
      Seconds: {time}
    </div>
  );
}
The time is increased by 1 every second – success.

Now the time state will be incremented by 1 every time the setInterval() callback runs, just like it’s supposed to.

2. Event listener re-registration

Another solution is to re-register the event listener with a new callback every time the state is changed, so the callback always accesses the fresh state from the enclosing scope.

We do this by passing the state variable to useEffect‘s dependencies array:

JavaScript
import { useState, useEffect } from 'react';

export default function App() {
  const [time, setTime] = useState(0);

  useEffect(() => {
    const timer = setInterval(() => {
      setTime(time + 1);
    }, 1000);

    return () => {
      window.clearInterval(timer);
    };
  }, [time]);

  return (
    <div>
      Seconds: {time}
    </div>
  );
}

Every time the time state changes, a new callback that accesses the fresh state is registered with setInterval(). setTime() is called with the latest time state plus 1, which increments the state value.

2. Registering event handler multiple times

This is a mistake frequently made by developers new to React hooks and functional components. Without a basic understanding of the re-rendering process in React, you might try to register event listeners like this:

JavaScript
import { useState } from 'react';

export default function App() {
  const [time, setTime] = useState(0);

  setInterval(() => {
    setTime((prevTime) => prevTime + 1);
  }, 1000);

  return (
    <div>
      Seconds: {time}
    </div>
  );
}

Or you might put it in a useEffect hook like this:

JavaScript
import { useState, useEffect } from 'react';

export default function App() {
  const [time, setTime] = useState(0);

  useEffect(() => {
    setInterval(() => {
      setTime((prevTime) => prevTime + 1);
    }, 1000);
  });

  return (
    <div>
      Seconds: {time}
    </div>
  );
}

If you do have a basic understanding of this, you should be able to already guess what this will lead to on the web page.

The seconds keep accelerating.
It eventually gets as bad as this.

What’s happening?

What’s happening is that in a functional component, code outside hooks, and outside the returned JSX markup is executed every time the component re-renders.

Here’s a basic breakdown of what happens in a timeline:

  1. 1st render: listener 1 registered.
  2. 1 second after listener 1 registration: time state updated, causing another re-render.
  3. 2nd render: listener 2 registered.
  4. Listener 1 never got de-registered after the re-render, so…
  5. 1 second after listener 1's last call: state updated.
  6. 3rd render: listener 3 registered.
  7. Listener 2 never got de-registered after the re-render, so…
  8. 1 second after listener 2 registration: state updated.
  9. 4th render: listener 4 registered.
  10. 1 second after listener 1's last call: state updated.
  11. 5th render: listener 5 registered.
  12. 1 second after listener 2's last call: state updated.
  13. 6th render: listener 6 registered.
  14. Listener 3 never got de-registered after the re-render, so…
  15. 1 second after listener 3 registration: state updated.
  16. 7th render: listener 7 registered…

Eventually, things spiral out of control as hundreds and then thousands (and then millions) of callbacks are created, each running at different times within the span of a second, incrementing the time by 1.

The fix for this is already in the first example in this article – put the event listener in the useEffect hook, and make sure to pass an empty dependencies array ([]) as the second argument.

JavaScript
import { useEffect, useState } from 'react';

export default function App() {
  const [time, setTime] = useState(0);

  useEffect(() => {
    setInterval(() => {
      setTime((prevTime) => prevTime + 1);
    }, 1000);
  }, []);

  return (
    <div>
      Seconds: {time}
    </div>
  );
}

useEffect runs after the first render and whenever any of the values in its dependencies array change, so passing an empty array makes it run only on the first render.

The time increases steadily, but by 2.

The time increases steadily now, but as you can see in the demo, it goes up by 2 every second, instead of 1 as in our very first example. This is because in React 18 strict mode, all components mount, unmount, then mount again, so useEffect runs twice even with an empty dependencies array, creating two listeners that each update the time by 1 every second.

We can fix this issue by turning off strict mode, but we'll see a much better way to fix it in the next section.

3. Not unregistering event handler on component unmount

What happened here was a memory leak. We should have ensured that any created event listener is unregistered when the component unmounts. Then, when React 18 strict mode does its compulsory unmount of the component, the first interval listener is unregistered before the second one is registered when the component mounts again. Only the second listener remains, and the time is updated correctly every second – by 1.

You can perform an action when the component unmounts by placing it in the cleanup function that useEffect optionally returns, so we call clearInterval() there to unregister the interval listener.

JavaScript
import { useEffect, useState } from 'react';

export default function App() {
  const [time, setTime] = useState(0);

  useEffect(() => {
    const timer = setInterval(() => {
      setTime((prevTime) => prevTime + 1);
    }, 1000);

    // 👇 Unregister interval listener
    return () => {
      clearInterval(timer);
    };
  }, []);

  return (
    <div>
      Seconds: {time}
    </div>
  );
}

useEffect's cleanup function doesn't run only when the component unmounts; it also runs before the effect re-runs whenever a value in the dependencies array changes. This prevents the memory leaks that happen when, for example, an observable prop changes value without the component unsubscribing from the previous observable.
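As a hedged illustration of that last point (the observable-like source prop with subscribe/unsubscribe methods is hypothetical, not from the example above), this is the kind of effect that dependency-change cleanup protects:

JavaScript
import { useEffect } from 'react';

function Subscriber({ source, onChange }) {
  useEffect(() => {
    const subscription = source.subscribe(onChange);

    return () => {
      // Runs before the effect re-runs for a new `source`,
      // and once more when the component unmounts
      subscription.unsubscribe();
    };
  }, [source, onChange]);

  return null;
}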

Conclusion

Creating event listeners in React is pretty straightforward; you just need to be aware of these caveats to avoid unexpected errors and frustrating debugging sessions. Don't access stale state variables, don't register more event listeners than required, and always unregister the event listener when the component unmounts.

Stop autosaving your code

Autosave has grown in popularity recently and become the default for many developers and teams, a must-have feature for various code editors. Some apps, like Visual Studio, stubbornly refuse to fully provide the feature, while others make it optional: WebStorm, PHPStorm, and other JetBrains products have it enabled by default; in VSCode, you have to turn it on if you want it.

So clearly there are two opposing views on the value of autosave: even though it can be highly beneficial, it has its downsides. In this article, we'll look at both sides of the autosave divide, good reasons for turning it off and good reasons not to.

Why you should stop autosaving your code

First, some reasons to think twice before enabling autosave in your code editor:

1. Higher and wasted resource usage

High VSCode CPU usage

When you use tools that perform an expensive action every time a file is changed and saved (build watchers, continuous testing tools, FTP client file syncers, etc.), turning on autosave makes these actions happen much more often. They will also run when the file contains errors and when you've made only a tiny change. It might be preferable for these tools to run only when they need to: when you reach a point where you really want to see the results of your changes.

Greater CPU and memory usage also means shorter battery life and more heat from higher CPU temperatures. Admittedly, this will keep becoming less of an issue as computers improve in processing power, memory capacity, and battery life across the board. But depending on your particular situation, you might want to conserve these resources as much as possible.

2. Harder to recover from unexpected errors

Error output in the console.

With autosave enabled, every single change you make to a code file is written to disk, whether or not it leaves the file in a valid state. This makes it harder to recover from unwanted changes.

What if you make an unintended and possibly buggy change, maybe from temporarily trying something out, and then close the file accidentally or unknowingly (autosave makes this more likely to happen)? With your Undo history wiped out, it will be harder to recover the previous working version of the file. You might even forget how the code used to look before the change, and then have to expend some mental effort to take the code back to what it was.

Git logo.

Of course, using version control tools like Git and Mercurial significantly decreases the chances of this happening. Still, the previous working version you want to recover might contain uncommitted changes that aren't available from version control, especially if you don't commit very frequently or your commit schedule is determined by more than just the code working after small changes, e.g., committing when a mini milestone is reached or after every successful build.

So if you want to continue enjoying the benefits of auto-save while minimizing the possibility of this issue occurring, it’s best if you always use source control and have a frequent commit schedule.

3. No auto-formatting on save

VSCode "Format on Save" option

Many IDEs and text editors have a feature that automatically formats your code, so you can focus on the task at hand. For example, VSCode has built-in auto-formatting functionality, and also allows extensions to be written to provide more advanced or opinionated auto-formatters for various languages and file extensions.

These editors typically provide an option to format the file when it is saved. For manual saving, this makes sense: you usually press Ctrl/Cmd + S after making a small working change and stopping typing, which is a natural point to format the code, so it's a great idea to combine formatting with saving so there's no need to think about it.
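In VSCode, for instance, the option lives in settings.json and looks something like this (just an illustration; the defaultFormatter line assumes you use the Prettier extension):

settings.json
{
  // Format the file whenever it is saved
  "editor.formatOnSave": true,
  // Optional: pick a specific formatter, e.g. the Prettier extension
  "editor.defaultFormatter": "esbenp.prettier-vscode"
}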

Prettier's format-on-save feature.

However, this feature isn't very compatible with auto-save, which is why editors/IDEs like WebStorm and VSCode do not format your code on auto-save (you can still press Ctrl/Cmd + S to trigger it, but isn't one of the reasons for enabling auto-save to avoid this overused keyboard shortcut?).

For one, it would probably be annoying for the cursor to jump around due to auto-formatting as you're typing. And then there's the issue we already talked about earlier: the file won't always be syntactically valid when it's auto-saved, so the auto-formatter will fail.

There is one way, though, to keep auto-formatting while leaving autosave turned on, and that is formatting on commit. You can do this using Git pre-commit hooks with tools like Prettier and Husky.
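As a rough sketch (the exact setup varies between Husky and lint-staged versions), a typical pre-commit formatting setup pairs Husky with lint-staged, with a .husky/pre-commit hook that runs npx lint-staged:

package.json
{
  "scripts": {
    "prepare": "husky install"
  },
  "lint-staged": {
    "**/*.{js,jsx,ts,tsx,css,md}": "prettier --write"
  }
}

The prepare script installs Husky's Git hooks, and lint-staged runs Prettier only on the files staged for the commit.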

Formatting still only happens on commit, though, so unless you're ready to format manually, you'll have to endure the messiness until your next commit (or just press that Ctrl + S).

4. Can be distracting

If you have a tool in your project that performs an action when files are saved and indicates this visually in your editor (e.g., a pop-up notification for recompilation, or terminal output for rebuilding), then with auto-save turned on it can be a bit distracting for these actions to fire whenever you stop typing for a little while.

For instance, in this demo, notice how the terminal output in VSCode changes wildly from typing in a small bunch of characters:

The terminal output changes wildly from typing in a small bunch of characters.

Text editors have tried to fix this problem (and the resource usage problem too) by adding autosave delays: waiting a certain amount of time after the file was last changed before actually writing the changes to disk.

This reduces the frequency at which the save-triggered actions occur and solves the issue to an extent, but it's a trade-off, as the lack of immediate saving creates another non-ideal situation.
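In VSCode, for example, the delay is configured with two settings in settings.json (the values here are just illustrative):

settings.json
{
  // Save automatically after a pause in typing
  "files.autoSave": "afterDelay",
  // How long (in milliseconds) to wait after the last change before saving
  "files.autoSaveDelay": 1000
}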

5. Auto-save is not immediate

The auto-save doesn't happen immediately.

Having an auto-save delay means that your code file will not be saved immediately. This can lead to some problems:

Data loss

Probably the biggest motivator for enabling auto-save is to reduce the likelihood that you’ll lose all the hard work you’ve put into creating code should an unexpected event like a system crash or the forced closing of the application occur. The higher your auto-save delay, the greater the chance of this data loss happening.

VSCode takes this into account: when its auto-save delay is set to 2 or more seconds, it shows the unsaved-file indicator for a recently modified file, and the unsaved-changes warning dialog if you try to close the file, until the delay completes.

On-save action lags

Tools that run on save, like build watchers, will be held back by the auto-save delay. With manual save, you know that hitting Ctrl + S will make the watcher rebuild immediately, but with delayed auto-save, you'll experience a lag between finishing your change and the watcher reacting to it. This could hurt the responsiveness of your workflow.

Why you should autosave your code

The reasons above probably won't be enough to convince many devs to disable autosave; it is a fantastic feature, after all. Now let's look at some of the reasons why it's so great to have:

1. No more Ctrl + S fatigue

Comic on Ctrl + S fatigue.
Image source: CommitStrip

If you use manual save, you probably press this keyboard shortcut hundreds or even thousands of times in a working day. Autosaving helps you avoid this entirely. Even if you're very used to it now, once you get used to your files being autosaved, you'll be hesitant to go back to the days of carrying out the ever-present chore of Ctrl + S.

Eradicating the need for Ctrl + S might even lower your chances of suffering from repetitive strain injury, as you no longer have to move your wrists and fingers over and over to type the key combination.

2. Save time and increase productivity

Save time photo.
Save icons created by Kiranshastry – Flaticon

The time you spend pressing the key combination to save a file might not seem like much, but it does add up over time. Turning auto-save on lets you use this time for more productive activities. Of course, if you just switched to auto-save, you’ll have to work on unlearning your Ctrl + S reflex for this to be a benefit to you.

3. Certainty of working with latest changes

Any good automation turns a chore into a background operation you no longer have to think about. This is what auto-save does to saving files; no longer are you unsure of whether you’re working with the most recent version of the file. Build watchers and other on-file-change tools automatically run after the file’s contents are modified, and display output associated with the latest file version.

4. Avoids errors due to file not being saved

Error output in the console.

This follows from the previous point. Debugging can be a tedious process and it’s not uncommon for developers to forget to save a file when tirelessly hunting for bugs. You probably don’t want to experience the frustration of scrutinizing your code, line after line, wondering how this particular bug can still exist after everything you’ve done.

You might think I'm exaggerating, but it might take 15 (20? 30??) minutes before you finally notice the unsaved-file indicator, especially if you've been trapped in a cycle of making small changes, saving, seeing the futility of your changes, making more small changes, saving… When your fix finally works and pressing Ctrl + S is the only thing missing, you might just assume your change didn't work, instead of checking for other possible reasons the error keeps reoccurring.

5. Encourages smaller changes due to triggering errors faster

When a tool performs an action due to a file being saved, the new contents of the file might be invalid and trigger an error. For example, a test case might fail when a continuous testing tool re-runs or there might be a syntax error when a build watcher re-builds.

Since these on-file-change actions occur more often (possibly much more often) when files are auto-saved, the action will run sooner after you type code that causes an error, so you'll be notified of the error faster. You'll have made fewer code changes by then, which makes it easier to identify the source of the error.

Conclusion

Autosave is an amazing feature with the potential to significantly improve your quality of life as a developer when used properly. Still, it’s not without its disadvantages, and as we saw in this article, enabling or disabling it is a trade-off to live with. Choose auto-format on save and lower CPU usage, or choose to banish Ctrl + S forever and gain the certainty of working with up-to-date files.

What are your views concerning the autosave debate? Please let me know in the comments!