Tari Ibaba

Tari Ibaba is a software developer with years of experience building websites and apps. He has written extensively on a wide range of programming topics and has created dozens of apps and open-source libraries.

New JavaScript pipeline operator: transform anything into a one-liner

With the pipeline operator you’ll stop writing code like this:

JavaScript
const names = ['USA', 'Australia', 'CodingBeauty'];

const lowerCasedNames = names.map((name) => name.toLowerCase());

const hyphenJoinedNames = lowerCasedNames.join('-');

const hyphenJoinedNamesCapitalized = capitalize(hyphenJoinedNames);

const prefixedNames = `Names: ${hyphenJoinedNamesCapitalized}`;

console.log(prefixedNames); // Names: Usa-australia-codingbeauty

and start writing code like this:

JavaScript
// Hack pipes: the |> operator and the % topic token
['USA', 'Australia', 'CodingBeauty']
  |> %.map((name) => name.toLowerCase())
  |> %.join('-')
  |> capitalize(%)
  |> `Names: ${%}`
  |> console.log(%); // Names: Usa-australia-codingbeauty

So refreshingly clean — and elegant! All those temporary variables are gone — not to mention the time it took to come up with those names *and* type them (not everyone types like The Flash, unfortunately).

You may have heard this partially true quote attributed to Phil Karlton: “There are only two hard things in computer science: cache invalidation and naming things.”

Using the JavaScript pipeline operator clears out the clutter, boosting readability and letting you write data-transforming code (which is basically all code) in a more intuitive way.

Verbosity should be avoided as much as possible, and the pipeline operator compacts code far better than reusing a short-named variable:

JavaScript
let buffer = await sharp('coding-beauty-v1.jpg').toBuffer();
let image = await jimp.read(buffer);
image.grayscale();
buffer = await image.getBufferAsync('image/png');

buffer = await sharp(buffer).resize(250, 250).toBuffer();
image = await jimp.read(buffer);
image.sepia();
buffer = await image.getBufferAsync('image/png');

await sharp(buffer).toFile('coding-beauty-v1.png');

Hopefully, almost no one codes like this on a regular basis. It’s a pretty horrible technique when done at a large scale; a perfect example of why we embrace immutability and type systems.

Unlike with the pipeline operator, there’s no certainty that the variable still contains the value you set at any given point; you’ll need to climb up the scope to look for re-assignments. We could have used the _ at an earlier point in the code; the value it holds at various points is simply not guaranteed.

And since we’re just using an underscore, you can’t quickly tell what the type of the variable is without checking the right-hand side of those re-assignments, unless you have a smart editor like VS Code (although I guess you could say that doesn’t matter, since they’re supposed to be “temporary” — at least until they’re not!).

All in all: poor readability, fragile and unstable, and several times harder for someone new to understand. Also, some would say underscores are “ugly”, especially in languages like JavaScript where they hardly show up.

JavaScript
// setup
function one() {
  return 1;
}
function double(x) {
  return x * 2;
}

let _;
_ = one(); // is now 1.
_ = double(_); // is now 2.

Promise.resolve().then(() => {
  // This does *not* print 2!
  // It prints 1, because '_' is reassigned downstream.
  console.log(_);
});

// _ becomes 1 before the promise callback.
_ = one(_);

Okay, so why don’t we just avoid this infestation of temporary underscores, and nest everything into one gigantic expression?

JavaScript
await sharp(
  jimp
    .read(
      await sharp(
        await jimp
          .read(await sharp('coding-beauty-v1.jpg').toBuffer())
          .grayscale()
          .getBufferAsync('image/png')
      )
        .resize(250, 250)
        .toBuffer()
    )
    .sepia()
    .getBufferAsync('image/png')
).toFile('coding-beauty-v2.png');

It’s a mess. The underscore is gone, but who in the world can understand this at a glance? How easy is it to tell how the data flows through this code, and to make any necessary adjustments?

Understanding, at a glance — this is what we should strive for with every line of code we write.

The pipeline operator greatly outshines every other method, giving us both freedom from temporary variables and readability. It was designed for this.

JavaScript
// We don't need to wrap 'await' anymore
await sharp('coding-beauty-v1.jpg').toBuffer()
  |> await jimp.read(%)
  |> %.grayscale()
  |> await %.getBufferAsync('image/png')
  |> await sharp(%).resize(250, 250).toBuffer()
  |> await jimp.read(%)
  |> %.sepia()
  |> await %.getBufferAsync('image/png')
  |> await sharp(%).toFile('coding-beauty-v2.png');

Here the % only exists within this particular pipeline.
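
A quick sketch of how the topic token behaves (this is still proposal syntax, so it needs the Babel setup covered below): % always holds the result of the previous step, and it has no meaning outside the pipeline expression.

JavaScript
const result = [1, 2, 3]
  |> %.map((n) => n * 2)
  |> %.reduce((sum, n) => sum + n, 0)
  |> `Total: ${%}`;

console.log(result); // Total: 12
// console.log(%); // SyntaxError: % has no meaning outside a pipeline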

Method chaining?

Who hasn’t used and combined hugely popular array methods like map, filter, and sort? They’re very hard to avoid in applications involving any form of list manipulation.

JavaScript
const numbers = '4,2,1,3,5';

const result = numbers
  .split(',')
  .map(Number)
  .filter((num) => num % 2 === 0)
  .map((num) => num * 2)
  .sort(); // [4, 8]

This is actually great. There aren’t any temporary variables or unreadable nesting here either and we can easily follow the chain from start to finish.

The formatting lets us easily add more methods at any point in the chain, and a feature-packed editor like VS Code can easily swap the processing order of two methods with the Alt + Up and Alt + Down shortcuts.

There’s a reason why libraries like Node’s core http module and jQuery are designed like this:

JavaScript
const http = require('http');

http
  .createServer((req, res) => {
    console.log('Welcome to Coding Beauty');
  })
  .on('error', () => {
    console.log('Oh no!');
  })
  .on('close', () => {
    console.log('Uuhhm... bye!');
  })
  .listen(3000, () => {
    console.log('Find me on port 3000');
  });

The problem with method chaining is that we can’t use it everywhere. If the class wasn’t designed for it, we’re stuck out in the cold.

It doesn’t work very well with generator methods, async/await and function/method calls outside the object, like we saw here:

JavaScript
await sharp(
  // 3-method chain, but not good enough!
  jimp
    .read(
      await sharp(
        // Same here
        await jimp
          .read(await sharp('coding-beauty-v1.jpg').toBuffer())
          .grayscale()
          .getBufferAsync('image/png')
      )
        .resize(250, 250)
        .toBuffer()
    )
    .sepia()
    .getBufferAsync('image/png')
).toFile('coding-beauty-v2.png');

But all this and more works with the pipeline operator: even object literals and the async import() function.

JavaScript
await sharp('coding-beauty-v1.jpg').toBuffer()
  |> await jimp.read(%)
  |> %.grayscale()
  |> await %.getBufferAsync('image/png')
  |> await sharp(%).resize(250, 250).toBuffer()
  |> await jimp.read(%)
  |> %.sepia()
  |> await %.getBufferAsync('image/png')
  |> await sharp(%).toFile('coding-beauty-v2.png');
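
For instance, here’s a quick sketch (same proposal syntax) piping a module path through a dynamic import() and into an object literal; the './greetings.js' module and its sayHello export are made up purely for illustration:

JavaScript
'./greetings.js'
  |> await import(%)
  |> %.sayHello('Coding Beauty')
  |> ({ message: %, length: %.length })
  |> console.log(%);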

Could have been F# pipes

Under the alternative F# proposal, we would have used the pipeline operator very similarly to F# pipes, with the above turning out like this instead:

JavaScript
(await sharp('coding-beauty-v1.jpg').toBuffer())
  |> (x) => jimp.read(x)
  |> await
  |> (x) => x.grayscale()
  |> (x) => x.getBufferAsync('image/png')
  |> await
  |> (x) => sharp(x).resize(250, 250).toBuffer()
  |> await
  |> (x) => jimp.read(x)
  |> await
  |> (x) => x.sepia()
  |> (x) => x.getBufferAsync('image/png')
  |> await
  |> (x) => sharp(x).toFile('coding-beauty-v2.png')
  |> await;

There was an alternative design, but you can already see why it makes for an inferior option: each step must be a unary (single-argument) function, which makes the operation more verbose, unless the step is already a single-argument function call.

Its weird handling of async/await was also a key reason why it got rejected — along with memory usage concerns. So, forget about F# pipes in JS!

Use the pipeline operator right now

Yes you can — with Babel.

Babel has a nice habit of implementing features before they’re officially integrated into the language; it did this for top-level await, optional chaining, and many others. The pipeline operator is no exception.

Just use the @babel/plugin-proposal-pipeline-operator plugin and you’re good to go.
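
A minimal config sketch, assuming the Hack-style proposal with % as the topic token (check the plugin docs for the exact option values):

JavaScript
// babel.config.js
module.exports = {
  plugins: [
    [
      '@babel/plugin-proposal-pipeline-operator',
      { proposal: 'hack', topicToken: '%' },
    ],
  ],
};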

It’s optional of course — but not for long.

Prettier, the code formatter, is already prepared for it.

We can’t say the same about VS Code or Node.js yet, though.

Right now there’s even speculation that % won’t be the final symbol used to pass values around in the pipeline; let’s watch and see how it all plays out.

Final thoughts

It’s always great to see new and exciting features come to the language. With the JavaScript pipeline operator, you’ll cleanse your code of temporary variables and cryptic nesting, and greatly boost its readability, efficiency, and quality.

The new M4 Mac mini is a MONSTER

They call it mini but what it can do is far from mini.

Only 5 x 5 x 2 inches and 1.5 pounds. That’s mega-light.

Yet the M4 chip makes it as dangerous as the new MacBook Pro — even though it costs much less.

Image source: verge.com

And just look at the ports:

Image source: apple.com

And you know I saw this pic on their website and was like, What the hell is this?

Then I saw this:

Image source: apple.com

Ohhh… it’s a CPU — no, a system unit…

It’s a “pure” computer with zero peripherals — not even a battery. You’re buying everything yourself.

Definitely dramatically superior to the gigantic system unit I used when I was younger.

But I didn’t think this was still a huge thing. Especially with integrated screens like the iMac.

The Mac Mini is like the complete opposite of the iMac, which is a gigantic beast that comes with everything…

Image source: apple.com

The iMac gives you predictability — no analysis paralysis in getting all your parts (although you could just buy Apple accessories anyway).

Image source: apple.com

Mac Mini is jam-packed with ports:

On the front we’ve got two 10 Gbps USB-C ports and a headphone jack:

Image source: apple.com

Back ports:

Lovely crisp icons indicate what they’re each for…

Image source: apple.com

But they put the power button at the bottom — dumb move!

You’ll have to lift it up any time you want to turn it on.

Wouldn’t it have been cool if they’d instead made the power button huge, covering the bottom completely — so you’d just have to push the whole thing down like those big red buttons on game shows?

But once it’s all powered up the possibilities are endless:

From basic typing to heavyweight gaming — like Apple Arcade stuff:

Image source: apple.com

And coding of course:

Image source: apple.com

And with an improved thermal system, Mac Mini can handle all these demanding tasks quietly:

Image source: apple.com

The base plan starts at $599 for 16 GB RAM and a 256 GB SSD with the M4 chip, and you can pay for higher configs like other Mac devices allow:

  • 16 GB RAM and 512 GB SSD – $799
  • 24 GB RAM and 512 GB SSD – $999

And then there’s the M4 Pro — 24 GB RAM and 512 GB SSD for $1399.

Overall the M4 Mac Mini is a perfect blend of power, compact design, and value, great for professionals looking for the ideal desktop workstation.

This amazing service saves you from wasting money on a new PC

Shadow PC saves you from wasting thousands of dollars on a new PC.

A fully customizable computer in the cloud with amazing capabilities.

Built to handle heavyweight work: from hardcore gaming to video editing to game dev.

A Windows PC you can take anywhere you go. Install whatever you want — if it runs on Windows, it runs on Shadow.

❌ Before:

You spend hours searching for the perfect PC to buy, one with specs that meet your needs and a price that stays within budget.

You empty your wallet and waste more time ordering it online or checking out your nearby stores.

Then you waste more money on data to download everything you need to finally get started.

✅ Now:

Join Shadow and get a cloud PC instantly.

Install everything with lightning-fast Internet speeds of over 1 Gbps:

Done.

And this Internet has nothing to do with your data plan — you only need data to stream the screen to your device; all the uploads and downloads happen on the remote PC at zero cost to you.

Lightweight and straightforward — open the Shadow app and you get to the desktop in less than a minute.

Turn it off and come back whenever to pick right where you left off.

Play hardcore CPU-intensive games without making a dent in your system resources or storage space. Your PC fan will be super silent and your CPU will be positively bored out of its mind with idleness.

Make it full-screen and enjoy a seamless, immersive experience.

When I was using it on a Windows PC there were times when I didn’t even know which was which. Cause it’s literally just Windows — no curated interface like in some gaming services.

It’s also got apps for Mac, Android, iOS, and Linux.

Including a convenient browser-based mode for quick and easy access:

Cost?

So there are two pricing tiers — Shadow Gaming for gaming and Shadow Pro for professional work like video editing.

For just $10 a month you get a powerful 3.1 GHz processor with 6 GB of RAM, a generous 5 TB of HDD storage, AND 256 GB of SSD storage!

Easily capable of running Fortnite, Minecraft, and many other popular games.

You also get a guaranteed 1 Gb/s of download bandwidth.

Upgrading to the Boost plan will get you an additional 6 GB RAM and a 256 GB SSD for $30 a month.

And then there’s the most powerful Power plan for even more… POWER.

Shadow Pro’s pricing is a bit different.

The plan names are typical and boring, but the starting plan is cheaper. I went with Standard and it was great.

This is amazing! How do I get started?

Just head over to shadow.tech and create an account:

After subscribing to a plan they’ll start setting up your cloud PC right away. Looks like they do it manually so it’ll take anywhere from 30-60 minutes to complete.

The email they sent me:

Install the app and sign in.

START NOW and start enjoying your personal computer in the cloud.

Final thoughts

New Gemini 1.5 FLASH model: An absolute Google game changer

So Google has finally decided to show OpenAI who the real king of AI is.

Their new Gemini 1.5 Flash model blows GPT-4o out of the water and the capabilities are hard to believe.

Lightning fast.

33 times cheaper than GPT-4o but has a 700% greater context — 1 million tokens.

What is 1 million tokens in the real world? Approximately:

  • Over 1 hour of video
  • Over 30,000 lines of code
  • Over 700,000 words

❌GPT-4o cost:

  • Input: $2.50 per million tokens
  • Output: $10 per million tokens
  • Cached input: $1.25 per million tokens

✅ Gemini 1.5 Flash cost:

  • Input: $0.075 per million tokens
  • Output: $0.30 per million tokens
  • Cached input: $0.01875 per million tokens

And then there’s the mini Flash-8B version for cost-efficient tasks — 66 times cheaper:

And the best part is the multi-modality — it can reason with text, files, images and audio in complex integrated ways.

And 1.5 Flash has almost all the capabilities of Pro but is much faster. And as a dev you can start using them now.

Gemini 1.5 Pro was tested with a 44-minute silent movie, and astonishingly, it easily broke the movie down into its various plot points and events, even pointing out tiny details that most of us would miss on a first watch.

Meanwhile the GPT-4o API only lets you work with text and images.

You can easily create, test and refine prompts in Google’s AI Studio — completely free.

Unlike the OpenAI playground, it doesn’t count against your billing.

Just look at the power of Google AI Studio — creating a food recipe based on an image:

I uploaded this delicious bread from gettyimages:

Now:

What if I want the response to be a specialized format for my API or something?

Then you can just turn on JSON mode and specify the response schema:
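
Here’s roughly what that looks like in code, using the official @google/generative-ai Node SDK; this is a sketch, so confirm the exact option names against the docs:

JavaScript
import { GoogleGenerativeAI } from '@google/generative-ai';

const genAI = new GoogleGenerativeAI(process.env.GEMINI_API_KEY);

const model = genAI.getGenerativeModel({
  model: 'gemini-1.5-flash',
  generationConfig: {
    // JSON mode: the model must reply with valid JSON
    responseMimeType: 'application/json',
    // a responseSchema can also be set here to pin down the exact shape
  },
});

const result = await model.generateContent(
  'List 3 cookie recipes as an array of { name, ingredients } objects'
);
console.log(JSON.parse(result.response.text()));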

OpenAI playground has this too, but it’s not as intuitive to work with.

Another upgrade Gemini has over OpenAI is how creative it can be.

In Gemini you can increase the temperature all the way from 0 to 2 (200%) to control how random and creative the responses are:

Meanwhile in OpenAI if you try going far beyond 100%, you’ll most likely get a whole literal load of nonsense.

And here’s the best part — when you’re done creating your prompt you can just use Get code — easily copy and paste the boilerplate API code and move lightning-fast in your development.

Works in several languages including Kotlin, Swift and Dart — efficient AI workflow in mobile dev.

In OpenAI playground you can get the code for Python and JavaScript.

Final thoughts

Gemini 1.5 Flash is a game-changer offering unparalleled capabilities at a fraction of the cost.

With its advanced multi-modality, ease of use, generous free tier, and creative potential, it sets a new standard for AI, leaving GPT-4o in the dust.

Svelte 5 is React on steroids😲

Svelte 5 just got released and it’s packed with amazing new upgrades.

It’s like React now but leaner, easier, and yes — faster.

Just look at this:

❌ Before Svelte 5:

Creating state:

You could just create state variables with let:

HTML
<script> let count = 0; let site = 'codingbeautydev.com' </script>

✅ Now with Svelte 5…

useState😱

HTML
<script> let count = $state(0); let site = $state('codingbeautydev.com'); </script>

vs React:

JavaScript
export function Component() {
  const [count, setCount] = useState(0);
  const [site, setSite] = useState('codingbeautydev.com');

  // ...
}

But see how the Svelte version is still much less verbose — and no need for a component name.

Watching for changes in state?

In React this is useEffect:

JavaScript
export function Component() {
  const [count, setCount] = useState(0);

  useEffect(() => {
    if (count > 9) {
      alert('Double digits?!');
    }
  }, [count]);

  const double = count * 2;

  // ...
}

❌ Before Svelte 5:

You had to use the cryptic, unnatural $: syntax to watch for changes — and to create derived state too.

HTML
<script>
  let count = 0;

  $: if (count > 9) {
    alert('Double digits?!');
  }

  $: double = count * 2;
</script>

✅ Now:

We have useEffect in Svelte with $effect 👇

And a more straightforward way to create auto-updated derived state:

HTML
<script>
  let count = $state(0);

  $effect(() => {
    if (count > 9) {
      alert('Double digits?!');
    }
  });

  const double = $derived(count * 2);
</script>

Svelte intelligently figures out dependencies to watch for, unlike React.
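
For instance, this little sketch reads two pieces of state inside the effect, and Svelte re-runs it whenever either one changes, with no dependency array in sight:

HTML
<script>
  let count = $state(0);
  let factor = $state(2);

  // Re-runs whenever `count` or `factor` changes,
  // because both are read inside the effect
  $effect(() => {
    console.log(`product: ${count * factor}`);
  });
</script>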

And what about handling events and updating the state?

In React:

JavaScript
export function Component() {
  // 👇 `setCount` function from `useState`
  const [count, setCount] = useState(0);

  return (
    // event handlers are good old JS functions
    <button onClick={() => setCount((prev) => prev + 1)}>
      Increase
    </button>
  );
}

❌ Before:

Svelte used to treat events specially and differently from props.

HTML
<script>
  let count = 0;
</script>

Count: {count}
<br />

<!-- 👇 special on: directive for events -->
<button on:click={() => count++}>
  Increase
</button>

✅ Now:

Svelte is now following React’s style of treating events just like properties.

HTML
<script>
  let count = $state(0);
</script>

Count: {count}
<br />

<!-- 👇 onclick is just a regular JS function now -->
<button onclick={() => count++}>
  Increase
</button>

Custom component props

In React:

Props are a regular JS object the component receives:

JavaScript
// 👇 `props` is an object
export function Person(props) {
  const { firstName, lastName } = props;

  return (
    <h1>
      {firstName} {lastName}
    </h1>
  );
}

❌ Before Svelte 5:

You had to use a weird export let approach to expose component properties:

HTML
<script> export let firstName; export let lastName; </script> <h1> {firstName} {lastName} </h1>

Now in Svelte 5:

There’s a $props function that returns an object like in React!

HTML
<script> const { firstName, lastName } = $props(); </script> <h1> {firstName} {lastName} </h1>

Custom component events

In React:

Events are just props so they just need to accept a callback and call it whenever:

JavaScript
import React, { useState } from 'react';

const Counter = ({ onIncrease }) => {
  const [increase, setIncrease] = useState(0);

  const handleIncrease = () => {
    onIncrease(increase);
  };

  return (
    <div>
      Increase: {increase}
      <button onClick={handleIncrease}>Increase</button>
    </div>
  );
};

export default Counter;

❌ Before Svelte 5:

You’d have to use this complex createEventDispatcher approach:

HTML
<script>
  import { createEventDispatcher } from 'svelte';

  let increase = 0;

  const dispatchEvent = createEventDispatcher();
</script>

<button
  on:click={() => {
    dispatchEvent('increase', increase);
  }}
>
  Increase
</button>

✅ Now:

Events are now props like in React:

HTML
<script>
  let increase = $state(0);

  const { onIncrease } = $props();
</script>

<button onclick={() => onIncrease(increase)}>
  Increase
</button>

Components: classes -> functions

Yes, Svelte has gone the way of React here too.

Remember when we still had to extend Component and use render() to create a class component?

JavaScript
import React, { Component } from 'react';

class Counter extends Component {
  constructor(props) {
    super(props);
    this.state = { count: 0 };
  }

  incrementCount = () => {
    this.setState((prevState) => ({
      count: prevState.count + 1,
    }));
  };

  render() {
    return (
      <div>
        <h1>Counter: {this.state.count}</h1>
        <button onClick={this.incrementCount}>Increment</button>
      </div>
    );
  }
}

export default Counter;

And then hooks came along to let us have much simpler function components?

Svelte has now done something similar, making components functions instead of classes by default.

In practice this won’t change much of how you write Svelte code — we never created the classes directly anyway — but it does tweak the app mounting code a little:

JavaScript
import { mount } from 'svelte';
import App from './App.svelte';

// ❌ Before
// const app = new App({ target: document.getElementById('app') });

// ✅ After
const app = mount(App, { target: document.getElementById('app') });

export default app;

Final thoughts

It’s great to see Svelte improve with inspiration from other frameworks.

Gaining the intuitiveness of the React-style design while staying lean and fast.

Web dev keeps moving.

Next.js 15 is an absolute game changer😲

Next.js 15 is officially here and things are better than ever!

From a brand new compiler to 700x faster build times, it’s never been easier to create full-stack web apps with exceptional performance.

Let’s explore the latest features from v15:

1. create-next-app upgrades: cleaner UI, 700x faster build

Revamped design

❌From this:

✅To this:

Webpack -> Turbopack

Turbopack: The fastest module bundler in the world (or so they say):

  • 700x faster than Webpack
  • 10x faster than Vite

And now with v15, adding it to your Next.js project is easier than ever before:
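
In the create-next-app prompts you can simply answer yes to Turbopack, or add the flag to your dev script yourself. At the time of writing the flag is --turbopack (earlier versions used --turbo), so check the docs for your version:

Plain text
# package.json
"scripts": {
  "dev": "next dev --turbopack"
}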

2. TypeScript configs — next.config.ts

With Next.js 15 you can finally create config files in TypeScript directly:

JavaScript
import type { NextConfig } from 'next'; const nextConfig: NextConfig = { /* config options here */ }; export default nextConfig;

The NextConfig type enables editor intellisense for every possible option.

3. React Compiler, React 19 support, and user-friendly errors

React Compiler is a React compiler (who would’ve thought).

A modern compiler that understands your React code at a deep level.

Bringing optimizations like automatic memoization — destroying the need for useMemo and useCallback in the vast majority of cases.

Saving time, preventing errors, speeding things up.

And it’s really easy to set up: You just install babel-plugin-react-compiler:

JavaScript
npm install babel-plugin-react-compiler

And add this to next.config.js:

JavaScript
const nextConfig = { experimental: { reactCompiler: true, }, }; module.exports = nextConfig;

React 19 support

Bringing upgrades like client and server Actions.
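
If you haven’t tried them yet, a server Action is just an async function marked with 'use server' that you can pass straight to a form. A minimal sketch (the file and function names here are made up):

JavaScript
// app/actions.js
'use server';

export async function addComment(formData) {
  const text = formData.get('text');
  // ...save `text` to the database here
  return { ok: true };
}

// app/page.js
import { addComment } from './actions';

export default function Page() {
  return (
    <form action={addComment}>
      <input name="text" />
      <button type="submit">Comment</button>
    </form>
  );
}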

Better hydration errors

Dev quality of life means a lot and error message usefulness plays a big part in that.

Next.js 15 sets the bar higher: now making intelligent suggestions on possible ways to fix the error.

Before v15:

Now:

You know I’ve had a tough time in the past with these hydration errors, so this will certainly be an invaluable improvement for me.

4. New caching behavior

No more automatic caching!

For all:

  • fetch() requests
  • Route handlers: GET, POST, etc.
  • <Link> client-side navigation.

But if you still want to cache fetch():

JavaScript
// fetch() is no longer cached by default in v15 ('no-store');
// opt back in with 'force-cache'
fetch('https://example.com', { cache: 'force-cache' });

Then you can cache the others with some next.config.js options.
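
For example (a sketch; confirm the exact option names in the Next.js docs), GET route handlers can opt back in with a route segment config, and <Link> navigations with the experimental staleTimes option:

JavaScript
// app/api/data/route.js: cache this GET handler again
export const dynamic = 'force-static';

// next.config.js: client router cache lifetimes (in seconds) for <Link>
module.exports = {
  experimental: {
    staleTimes: {
      dynamic: 30,
      static: 180,
    },
  },
};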

5. Partial Prerendering (PPR)

PPR combines static and dynamic rendering in the same page.

Drastically improving performance by loading static HTML instantly and streaming the dynamic parts in the same HTTP request.

JavaScript
import { Suspense } from 'react';
import { StaticComponent, DynamicComponent } from '@/app/ui';

export const experimental_ppr = true;

export default function Page() {
  return (
    <>
      <StaticComponent />
      <Suspense fallback={...}>
        <DynamicComponent />
      </Suspense>
    </>
  );
}

All you need is this in next.config.js:

JavaScript
const nextConfig = { experimental: { ppr: 'incremental', }, }; module.exports = nextConfig;

6. after

Next.js 15 gives you a clean way to separate essential from non-essential tasks in every server request:

  • Essential: Auth checks, DB updates, etc.
  • Non-essential: Logging, analytics, etc.
JavaScript
import { unstable_after as after } from 'next/server';
import { log } from '@/app/utils';

export default function Layout({ children }) {
  // Secondary task
  after(() => {
    log();
  });

  // Primary tasks
  // fetch() from DB

  return <>{children}</>;
}

Start using it now with experimental.after:

JavaScript
const nextConfig = { experimental: { after: true, }, }; module.exports = nextConfig;

These are just 6 of the many impactful new features in Next.js 15.

Get it now with npx create-next-app@rc and start enjoying radically improved build times and superior developer quality of life.

This new React library will make you dump Redux forever

The new Zustand library changes everything for state management in web dev.

The simplicity completely blows Redux away. It’s like Assembly vs Python.

Forget action types, dispatch, Providers and all that verbose garbage.

Just use a hook! 👇

JavaScript
import { create } from 'zustand';

const useStore = create((set) => ({
  count: 0,
  increment: () => set((state) => ({ count: state.count + 1 })),
}));

Effortless and intuitive with all the benefits of Redux and Flux — immutability, data-UI decoupling…

With none of the boilerplate — it’s just an object.

Redux had to be patched with hook support but Zustand was built from the ground up with hooks in mind.

JavaScript
function App() {
  const store = useStore();

  return (
    <div>
      <div>Count: {store.count}</div>
      <button onClick={store.increment}>Increment</button>
    </div>
  );
}

export default App;

Share the store across multiple components and select only what you want:

JavaScript
function Counter() {
  // ✅ Only `count`
  const count = useStore((state) => state.count);
  return <div>Count: {count}</div>;
}

function Controls() {
  // ✅ Only `increment`
  const increment = useStore((state) => state.increment);
  return <button onClick={increment}>Increment</button>;
}

Create multiple stores to decentralize data and scale intuitively.

Let’s be real, that single-state stuff doesn’t always make sense. And it defies encapsulation.

It’s often more natural to let a branch of components have their localized global state.

JavaScript
// ✅ More global store to handle the count data
const useStore = create((set) => ({
  count: 0,
  increment: (by) => set((state) => ({ count: state.count + by })),
}));

// ✅ More local store to handle user input logic
const useControlStore = create((set) => ({
  input: '',
  setInput: (input) => set({ input }),
}));

function Controls() {
  return (
    <div>
      <CountInput />
      <Button />
    </div>
  );
}

function Button() {
  const increment = useStore((state) => state.increment);
  const input = useControlStore((state) => state.input);

  return (
    <button onClick={() => increment(Number(input))}>
      Increment by {input}
    </button>
  );
}

function CountInput() {
  const input = useControlStore((state) => state.input);
  const setInput = useControlStore((state) => state.setInput);

  return (
    <input value={input} onChange={(e) => setInput(e.target.value)} />
  );
}

Meet useShallow(), a powerful way to get derived states — they update instantly when any of the original states change.

JavaScript
import { create } from 'zustand';
import { useShallow } from 'zustand/react/shallow';

const useLibraryStore = create((set) => ({
  fiction: 0,
  nonFiction: 0,
  borrowedBooks: {},
  // ...
}));

// ✅ Object pick
const { fiction, nonFiction } = useLibraryStore(
  useShallow((state) => ({
    fiction: state.fiction,
    nonFiction: state.nonFiction,
  }))
);

// ✅ Array pick
const [fiction, nonFiction] = useLibraryStore(
  useShallow((state) => [state.fiction, state.nonFiction])
);

// ✅ Mapped picks
const borrowedBooks = useLibraryStore(
  useShallow((state) => Object.keys(state.borrowedBooks))
);

And what if you don’t want instant updates — only at certain times?

It’s easier than ever — just pass a second argument to your store hook.

JavaScript
const user = useUserStore( (state) => state.user, (oldUser, newUser) => compare(oldUser.id, newUser.id) );

And how about derived updates based on previous states, like in React’s useState?

Don’t worry! In Zustand states update partially by default:

JavaScript
const useStore = create((set) => ({
  user: {
    username: 'tariibaba',
    site: 'codingbeautydev.com',
    color: 'blue💙',
  },
  premium: false,
  // `user` object is not affected
  // `state` is the current state before the update
  unsubscribe: () => set((state) => ({ premium: false })),
}));

It only works at the first level though — you have to handle deeper partial updates by yourself:

JavaScript
const useStore = create((set) => ({
  user: {
    username: 'tariibaba',
    site: 'codingbeautydev.com',
    color: 'blue💙',
  },
  premium: false,
  updateUsername: (username) =>
    // 👇 deep updates necessary to retain other object properties
    set((state) => ({ user: { ...state.user, username } })),
}));

If you don’t want the partial (merging) behavior, pass the new object directly with true as the second argument to replace the state entirely.

JavaScript
const useStore = create((set) => ({
  user: {
    username: 'tariibaba',
    site: 'codingbeautydev.com',
    color: 'blue💙',
  },
  premium: false,
  // Clear data with `true`
  resetAccount: () => set({}, true),
}));

Zustand even has built-in support for async actions — no need for Redux Thunk or any external library.

JavaScript
const useStore = create((set) => ({
  user: {
    username: 'tariibaba',
    site: 'codingbeautydev.com',
    color: 'blue💙',
  },
  premium: false,
  // ✅ async actions
  updateFavColor: async (color) => {
    await fetch('https://api.tariibaba.com', {
      method: 'PUT',
      body: color,
    });
    set((state) => ({ user: { ...state.user, color } }));
  },
}));

It’s also easy to get state within actions, thanks to get — the 2nd param in create()‘s callback:

JavaScript
// ✅ `get` lets us use state directly in actions
const useStore = create((set, get) => ({
  user: {
    username: 'tariibaba',
    site: 'codingbeautydev.com',
    color: 'blue💙',
  },
  messages: [],
  sendMessage: ({ message, to }) => {
    const newMessage = {
      message,
      to,
      // ✅ `get` gives us the `user` object
      from: get().user.username,
    };
    set((state) => ({
      messages: [...state.messages, newMessage],
    }));
  },
}));

It’s all about hooks in Zustand, but if you want you can read and subscribe to values in state directly.

JavaScript
// Get a non-observed state with getState()
const count = useStore.getState().count;

useStore.subscribe((state) => {
  console.log(`new value: ${state.count}`);
});

This makes it great for cases where the property changes a lot but you only need the latest value for intermediate logic, not direct UI:

JavaScript
export default function App() {
  const widthRef = useRef(useStore.getState().windowWidth);

  useEffect(() => {
    // subscribe() returns an unsubscribe function
    // we can reuse as the effect cleanup
    return useStore.subscribe((state) => {
      widthRef.current = state.windowWidth;
    });
  }, []);

  useEffect(() => {
    const id = setInterval(() => {
      console.log(`Width is now: ${widthRef.current}`);
    }, 1000);
    return () => clearInterval(id);
  }, []);

  // ...
}

Zustand outshines Redux, MobX, and all the others in almost every way. Use it for your next project and you won’t regret it.

The secret code Google uses to monitor everything you do online

Google now has at least 3 ways to track your search clicks and visits that they hide from you.

Have you ever tried to copy a URL directly from Google Search?

When I did that a few months ago, I unexpectedly got something like this from my clipboard.

Plain text
https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=&cad=rja&uact=8&ved=2ahUKEwjUmK2Tk-eCAxXtV0EAHX3jCyoQFnoECAkQAQ&url=https%3A%2F%2Fcodingbeautydev.com%2Fblog%2Fvscode-tips-tricks&usg=AOvVaw0xw4tT2wWNUxkHWf90XadI&opi=89978449

I curiously visited the page and guess what? It took me straight to the original URL.

This cryptic URL turned out to be a middleman that would redirect you to the actual page.

But what for?

After some investigation, I discovered that this was how Google Search had been recording our clicks and tracking every single visited page.

They set custom data- attributes and a mousedown event on each link in the search results page:

HTML
<a
  jsname="UWckNb"
  href="https://codingbeautydev.com/blog/vscode-tips-tricks"
  data-jsarwt="1"
  data-usg="AOvVaw0xw4tT2wWNUxkHWf90XadI"
  data-ved="2ahUKEwjUmK2Tk-eCAxXtV0EAHX3jCyoQFnoECAkQAQ"
>
</a>

A JavaScript function would change the href to a new URL with several parameters including the original URL, as soon as you start clicking on it.

JavaScript
import express from 'express';

const app = express();

app.get('/url', (req, res) => {
  // Record click and stuff...
  res.redirect(req.query.url);
});

app.listen(3000);
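
The client-side half presumably looks something like this (a simplified sketch, definitely not Google’s actual code):

JavaScript
document.querySelectorAll('a[data-ved]').forEach((link) => {
  link.addEventListener('mousedown', () => {
    const tracked = new URL('https://www.google.com/url');
    tracked.searchParams.set('url', link.href);
    tracked.searchParams.set('ved', link.dataset.ved);
    // the href changes before the click or copy completes
    link.href = tracked;
  });
});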

So even though the browser would show the actual URL at the bottom-left on hover, once you clicked on it to copy, the href would change instantly.

Why mousedown over click? Probably because there won’t be a click event when users open the link in a new tab, which is something that happens quite often.

And so after right-clicking to copy like I did, mousedown would fire and the href would change, which would even update that preview URL at the bottom-left.

The new www.google.com/url page would log the visit and move out of the way so fast you’d barely notice it — unless your internet moves at snail speed.

They use this data for tools like Google Analytics and Search Console, so site owners can improve the quality of their search results and pages by analyzing click-through rates — something probably also used as a Search ranking factor. Not to mention recording clicks on Search ads to rake in the billions of yearly ad revenue.

Google Search Console. Source: Search Console Discover report now includes Chrome data

But Google got smarter.

They realized this URL tracking method had a serious issue for a certain group. For their users with slower internet speeds, the annoying redirect technique added a non-trivial amount of delay to the request and increased bounce rate.

So they did something new.

Now, instead of that cryptic www.google.com/url stuff, you get… the same exact URL?

With the <a> ping attribute, they have now successfully moved their tracking behind the scenes.

The ping attribute specifies one or more URLs that will be notified when the user visits the link. When a user opens the link, the browser asynchronously sends a short HTTP POST request to the URLs in ping.
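
In the search results, that means markup roughly like this (an illustrative snippet, not Google’s exact HTML):

HTML
<a
  href="https://codingbeautydev.com/blog/vscode-tips-tricks"
  ping="https://www.google.com/url?sa=t&url=https%3A%2F%2Fcodingbeautydev.com%2Fblog%2Fvscode-tips-tricks"
>
  10 essential VS Code tips and tricks
</a>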

The keyword here is asynchronously — www.google.com/url quietly records the click in the background without ever notifying the user, avoiding the redirect and keeping the user experience clean.

Browsers don’t visually indicate the ping attribute in any way to the user — a specification violation.

When the ping attribute is present, user agents should clearly indicate to the user that following the hyperlink will also cause secondary requests to be sent in the background, possibly including listing the actual target URLs.

HTML Standard (whatwg.org)

Not to mention a privacy concern, which is why browsers like Firefox refuse to enable this feature by default.

In Firefox Google sticks with the mousedown event approach:

There are many reasons not to disable JavaScript in 2023, but even if you do, Google will simply replace the href with a direct link to www.google.com/url.

HTML
<a href="/url?sa=t&source=web&rct=j&url=https://codingbeautydev.com/blog/vscode-tips-tricks..."> 10 essential VS Code tips and tricks for greater productivity </a>

So, there’s really no built-in way to avoid this mostly invisible tracking.

Even though the analytics are highly beneficial for Google and site owners in improving result relevancy and site quality, as users we should be aware of the existence and implications of these tracking methods.

As technology becomes more integrated into our lives, we will increasingly have to choose between privacy and convenience and ask ourselves whether the trade-offs are worth it.

Stop doing this or nobody will understand your code

I was coding the other day and stumbled upon something atrocious.

Do you see it?

Let’s zoom in a bit more:

This line:

Please don’t do this in any language.

Don’t await properties.

You’re destroying your code readability and ruining the whole concept of OOP.

Properties are features, not actions.

They don’t do things, like methods do. They simply are.

They are data holders representing states of an object.

Simple states:

JavaScript
class Person { firstName = 'Tari'; lastName = 'Ibaba'; site = 'codingbeautydev.com'; }

Derived states — what getters are meant for:

JavaScript
class Person {
  firstName = 'Tari';
  lastName = 'Ibaba';
  site = 'codingbeautydev.com';

  get fullName() {
    return `${this.firstName} ${this.lastName}`;
  }
}

const person = new Person();
console.log(person.fullName); // Tari Ibaba

But the status property was returning a Dart Future — JavaScript’s Promise equivalent:

❌Before:

JavaScript
class Permission {
  get status() {
    return new Promise((resolve) => {
      // resolve();
    });
  }
}

const notifications = new Permission();
await notifications.status;

It would have been so much better to use a method:

✅ After:

JavaScript
class Permission {
  getStatus() {
    return new Promise((resolve) => {
      // resolve();
    });
  }
}

const notifications = new Permission();
await notifications.getStatus();

And now async/await can make things even more intuitive:

JavaScript
class Permission {
  async getStatus() {
    // ...
  }
}

const notifications = new Permission();
await notifications.getStatus();

But guess what happens when you try to combine async/await with properties, say by making a getter async?

Exactly: you can’t. It’s a property, and properties were never meant to do asynchronous work.
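
JavaScript won’t even let you declare one; an async getter is a syntax error. The most a getter can do is quietly return a promise, hiding the cost (a small sketch with a made-up endpoint):

JavaScript
class Permission {
  // ❌ SyntaxError: getters can't be declared `async`
  // async get status() {}

  // The most a getter can do is return a promise,
  // hiding the asynchronous work behind property access
  get status() {
    return fetch('https://api.example.com/permission-status').then((res) =>
      res.json()
    );
  }
}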

This rule doesn’t just apply to async tasks, it applies to any long-running action, synchronous or not:

❌ Before:

JavaScript
class ActionTimer {
  constructor(action) {
    this.action = action;
  }

  // ❌ Property
  get time() {
    const then = Date.now();
    for (let i = 0; i < 1000000; i++) {
      this.action();
    }
    const now = Date.now();
    return now - then;
  }
}

const splice = () => [...Array(100)].splice(0, 10);

const actionTimer = new ActionTimer(splice);
console.log(`splice time: ${actionTimer.time}ms`);

✅ After:

Let them know that the action is expensive enough to deserve caching or variable assignment:

JavaScript
class ActionTimer {
  constructor(action) {
    this.action = action;
  }

  // ✅ Get method
  getTime() {
    const then = Date.now();
    for (let i = 0; i < 1000000; i++) {
      this.action();
    }
    const now = Date.now();
    return now - then;
  }
}

const splice = () => [...Array(100)].splice(0, 10);

const actionTimer = new ActionTimer(splice);
const theTime = actionTimer.getTime();
console.log(`splice time: ${theTime}ms`);

But sometimes even this isn’t enough for something to remain a property.

Check this out — do you see the issue with the fullness setter property?

JavaScript
class Human {
  site = 'codingbeautydev.com';
  status = '';
  _fullness = 0;
  timesFull = 0;

  set fullness(value) {
    this._fullness = value;
    if (this._fullness <= 4) {
      this.status = 'hungry';
    } else if (this._fullness <= 7) {
      this.status = 'okay';
    } else {
      this.status = 'full';
      this.timesFull++;
    }
  }
}

const human = new Human();
human.fullness = 5;
console.log(`I am ${human.status}`);

It doesn’t just modify the backing _fullness field — it changes multiple other fields. This doesn’t make sense as a property, as data.

It’s affecting so much aside from itself.

It has side-effects.

Setting this property multiple times modifies the object differently each time.

JavaScript
const human = new Human();

human.fullness = 8;
console.log(human.timesFull); // 1

human.fullness = 9;
console.log(human.timesFull); // 2

console.log(`I am ${human.status}`);

So even though it doesn’t do much, it still needs to be a method.

JavaScript
class Human {
  site = 'codingbeautydev.com';
  status = '';
  _fullness = 0;

  setFullness(value) {
    this._fullness = value;
    if (this._fullness <= 4) {
      this.status = 'hungry';
    } else if (this._fullness <= 7) {
      this.status = 'okay';
    } else {
      this.status = 'full';
    }
  }
}

const human = new Human();
human.setFullness(5);
console.log(`I am ${human.status}`);

Name them right

Natural code. Coding like natural language.

So always name the properties with nouns like we’ve been doing here.

JavaScript
class ActionTimer {
  constructor(action) {
    this.action = action;
  }

  // ✅ Noun for property
  get time() {
    // ...
  }
}

But you see what we did when it was time to make it a method?

We made it a verb. Cause now it’s an action that does something.

JavaScript
class ActionTimer {
  constructor(action) {
    this.action = action;
  }

  // ✅ Verb phrase for method
  getTime() {
    // ...
  }
}

Nouns for entities: variables, properties, classes, objects, and more.

Not this

JavaScript
// ❌ do-examples.ts

// ❌ Cryptic
const f = 'Coding';
const l = 'Beauty';

// ❌ Verb
const makeFullName = `${f} ${l}`;

class Book {
  // ❌ Adjectival phrase
  createdAt: Date;
}

But this:

JavaScript
// ✅ examples.ts

// ✅ Readable
const firstName = 'Coding';
const lastName = 'Beauty';

// ✅ Noun
const fullName = `${firstName} ${lastName}`;

class Book {
  // ✅ Noun phrase
  dateCreated: Date;
}

Verbs for actions: functions and object methods.

Key points: when to use a method vs a property?

Use a property when:

  • The action modifies or returns only the backing field.
  • The action is simple and inexpensive.

Use a method when:

  • The action modifies multiple fields.
  • The action is async or expensive.

All for clean, readable, intuitive code.

Nobody wants to use these Array methods😭

There’s so much more to arrays than map(), filter(), find(), and push().

But most devs are completely clueless about this — there are several powerful methods they’re missing out on.

Check these out:

1. copyWithin()

Array copyWithin() copies a part of an array to another position in the same array and returns it without increasing its length.

JavaScript
const array = [1, 2, 3, 4, 5];

// copyWithin(target, start, end)
// replace arr at target with start..end
// a. target -> 3 (index)
// b. start -> 1 (index)
// c. end -> 3 (index)
// start..end -> 2, 3
const result = array.copyWithin(3, 1, 3);

console.log(result); // [1, 2, 3, 2, 3]

The end parameter is optional:

JavaScript
const array = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10];

// "end" not specified, so last array index used
// target -> 0 (index)
// start..end -> 6, 7, 8, 9, 10
const result = array.copyWithin(0, 5);

// [6, 7, 8, 9, 10, 6, 7, 8, 9, 10]
console.log(result);
JavaScript
const array = [1, 2, 3, 4, 5];

// Copy numbers 2, 3, 4, and 5 (cut off at index 4)
const result = array.copyWithin(3, 1, 6);

console.log(result); // [1, 2, 3, 2, 3]

2. at() and with()

at() came first and with() came a year after that in 2023.

They are the functional and immutable versions of single-element array modification and access.

JavaScript
const colors = ['pink', 'purple', 'red', 'yellow'];

console.log(colors.at(1)); // purple
console.log(colors.with(1, 'blue'));
// ['pink', 'blue', 'red', 'yellow']

// Original not modified
console.log(colors);
// ['pink', 'purple', 'red', 'yellow']

The cool thing about these new methods is how they let you get and change element values with negative indexing.
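
For example, negative indices count back from the end of the array:

JavaScript
const colors = ['pink', 'purple', 'red', 'yellow'];

console.log(colors.at(-1)); // yellow
console.log(colors.at(-2)); // red

console.log(colors.with(-1, 'blue'));
// ['pink', 'purple', 'red', 'blue']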

3. Array reduceRight() method

Works like reduce() but the callback goes from right to left instead of left to right:

JavaScript
const letters = ['b', 'e', 'a', 'u', 't', 'y'];

const word = letters.reduce((word, letter) => word + letter, '');
console.log(word); // beauty

// Reducer iterations
// 1. ('', 'y') => '' + 'y' = 'y'
// 2. ('y', 't') => 'y' + 't' = 'yt'
// 3. ('yt', 'u') => 'ytu'
// ...
// n. ('ytuae', 'b') => 'ytuaeb'
const wordReversed = letters.reduceRight(
  (word, letter) => word + letter,
  ''
);
console.log(wordReversed); // ytuaeb

Here’s another great scenario for reduceRight():

JavaScript
const thresholds = [
  { color: 'blue', threshold: 0.7 },
  { color: 'orange', threshold: 0.5 },
  { color: 'red', threshold: 0.2 },
];

const value = 0.9;

// Scan from the lowest threshold up: the last
// threshold that `value` exceeds wins
const color = thresholds.reduceRight(
  (result, item) => (value > item.threshold ? item.color : result),
  null
);

console.log(color); // blue

4. Array findLast() method

New in ES13: find array item starting from last element.

Great for cases where searching from the end produces better performance than searching with find().

Example:

JavaScript
const memories = [
  // 10 years of memories...
  { date: '2020-02-05', description: 'My first love' },
  // ...
  { date: '2022-03-09', description: 'Our first baby' },
  // ...
  { date: '2024-01-25', description: 'Our new house' },
];

const currentYear = new Date().getFullYear();
const query = 'unique';

const milestonesThisYear = memories.find(
  (memory) =>
    new Date(memory.date).getFullYear() === currentYear &&
    memory.description.includes(query)
);

This works, but since our target object is closer to the tail of the array, findLast() should run faster:

JavaScript
const memories = [
  // 10 years of memories...
  { date: '2020-02-05', description: 'My first love' },
  // ...
  { date: '2022-03-09', description: 'Our first baby' },
  // ...
  { date: '2024-01-25', description: 'Our new house' },
];

const currentYear = new Date().getFullYear();
const query = 'unique';

const milestonesThisYear = memories.findLast(
  (memory) =>
    new Date(memory.date).getFullYear() === currentYear &&
    memory.description.includes(query)
);

Another use case for findLast() is when we have to specifically search the array from the end to get the correct element.

For example, if we want to find the last even number in a list of numbers, find() would produce a totally wrong result:

JavaScript
const nums = [7, 14, 3, 8, 10, 9];

// gives 14, instead of 10
const lastEven = nums.find((value) => value % 2 === 0);

console.log(lastEven); // 14

But findLast() will start the search from the end and give us the correct item:

JavaScript
const nums = [7, 14, 3, 8, 10, 9]; const lastEven = nums.findLast((num) => num % 2 === 0); console.log(lastEven); // 10

5. toSorted(), toReversed(), toSpliced()

ES2023 came fully packed with immutable versions of sort(), reverse(), and splice().

Okay maybe splice() isn’t used as much as the others, but they all mutate the array in place.

JavaScript
const original = [5, 1, 3, 4, 2];

const reversed = original.reverse();
console.log(reversed); // [2, 4, 3, 1, 5] (same array)
console.log(original); // [2, 4, 3, 1, 5] (mutated)

const sorted = original.sort();
console.log(sorted); // [1, 2, 3, 4, 5] (same array)
console.log(original); // [1, 2, 3, 4, 5] (mutated)

const deleted = original.splice(1, 2, 7, 10);
console.log(deleted); // [2, 3] (deleted elements)
console.log(original); // [1, 7, 10, 4, 5] (mutated)

Immutability gives us predictable and safer code; debugging is much easier as we’re certain variables never change their value.

The arguments are exactly the same as their mutating counterparts; only splice() and toSpliced() differ in their return values.

JavaScript
const original = [5, 1, 3, 4, 2];

const reversed = original.toReversed();
console.log(reversed); // [2, 4, 3, 1, 5] (copy)
console.log(original); // [5, 1, 3, 4, 2] (unchanged)

const sorted = original.toSorted();
console.log(sorted); // [1, 2, 3, 4, 5] (copy)
console.log(original); // [5, 1, 3, 4, 2] (unchanged)

const spliced = original.toSpliced(1, 2, 7, 10);
console.log(spliced); // [1, 7, 10, 4, 5] (copy)
console.log(original); // [5, 1, 3, 4, 2] (unchanged)

6. Array lastIndexOf() method

The lastIndexOf() method returns the last index where a particular element can be found in an array.

JavaScript
const colors = ['a', 'e', 'a', 'f', 'a', 'b']; const index = colors.lastIndexOf('a'); console.log(index); // 4

We can pass a second argument to lastIndexOf() to specify the index it should search backwards from; nothing after that index is considered:

JavaScript
const colors = ['a', 'e', 'a', 'f', 'a', 'b'];

// Get the last index of 'a', searching backwards from index 3
const index1 = colors.lastIndexOf('a', 3);
console.log(index1); // 2

const index2 = colors.lastIndexOf('a', 0);
console.log(index2); // 0

const index3 = colors.lastIndexOf('f', 2);
console.log(index3); // -1

7. Array flatMap() method

The flatMap() method transforms an array using a given callback function and then flattens the transformed result by one level:

JavaScript
const arr = [1, 2, 3, 4]; const withDoubles = arr.flatMap((num) => [num, num * 2]); console.log(withDoubles); // [1, 2, 2, 4, 3, 6, 4, 8]

Calling flatMap() on the array does the same thing as calling map() followed by a flat() of depth 1, but it's a bit more efficient than calling the two methods separately.

JavaScript
const arr = [1, 2, 3, 4];

// flat() uses a depth of 1 by default
const withDoubles = arr.map((num) => [num, num * 2]).flat();

console.log(withDoubles);
// [1, 2, 2, 4, 3, 6, 4, 8]

Final thoughts

They’re not that well-known (yet), but they have their unique uses and are quite powerful.