The 5 most amazing new JavaScript features in 2024

2024 has been an incredible year of brand new JS feature upgrades with ES15 and promising proposals.

From sophisticated async features to syntactic array sugar and modern regex, JavaScript coding has become easier and faster than ever.

1. Native array group-by is here

Object.groupBy():

JavaScript
const fruits = [
  { name: 'pineapple🍍', color: '🟡' },
  { name: 'apple🍎', color: '🔴' },
  { name: 'banana🍌', color: '🟡' },
  { name: 'strawberry🍓', color: '🔴' },
];

const groupedByColor = Object.groupBy(
  fruits,
  (fruit, index) => fruit.color
);

console.log(groupedByColor);

Literally the only thing keeping dinosaur Lodash alive — no more!

I was expecting a new instance method like Array.prototype.groupBy but they made it static for whatever reason.

Then we have Map.groupBy to group with object keys:

JavaScript
const array = [1, 2, 3, 4, 5];

const odd = { odd: true };
const even = { even: true };

Map.groupBy(array, (num, index) => {
  return num % 2 === 0 ? even : odd;
});
// => Map { {odd: true}: [1, 3, 5], {even: true}: [2, 4] }

Almost no one groups arrays this way though, so Map.groupBy will probably be far less popular.

2. Resolve promise from outside — modern way

With Promise.withResolvers().

It’s very common to resolve promises externally, and before now, we had to do it with a Deferred class:

JavaScript
class Deferred {
  constructor() {
    this.promise = new Promise((resolve, reject) => {
      this.resolve = resolve;
      this.reject = reject;
    });
  }
}

const deferred = new Deferred();

deferred.resolve();

Or install from NPM — one more dependency.

But now with Promise.withResolvers():

JavaScript
const { promise, resolve, reject } = Promise.withResolvers();

See how I use it to rapidly promisify an event stream — awaiting an observable:

JavaScript
// data-fetcher.js
// ...
const { promise, resolve, reject } = Promise.withResolvers();

export function startListening() {
  eventStream.on('data', (data) => {
    resolve(data);
  });
}

export async function getData() {
  return await promise;
}

// client.js
import { startListening, getData } from './data-fetcher.js';

startListening(); // ✅ listen for single stream event

const data = await getData();

3. Buffer performance upgrades

Buffers are small data stores that hold temporary data your app generates.

They make it incredibly easy to transfer and process data across the various stages of a pipeline.

Pipelines like:

  • File processing: Input file → buffer → process → new buffer → output file
  • Video streaming: Network response → buffer → display video frame
  • Restaurant queues: Receive customer → queue/buffer → serve customer
JavaScript
const fs = require('fs');
const { Transform } = require('stream');

const inputFile = 'input.txt';
const outputFile = 'output.txt';

const inputStream = fs.createReadStream(inputFile, 'utf-8');

const transformStream = new Transform({
  transform(chunk, encoding, callback) {
    // ✅ transform chunks from buffer
  },
});

const outputStream = fs.createWriteStream(outputFile);

// ✅ start pipeline
inputStream.pipe(transformStream).pipe(outputStream);

With buffers, each stage processes data at different speeds, independent of the others.

But what happens when the data moving through the pipeline exceeds the buffer’s capacity?

Before, we’d have to copy all the data in the current buffer over to a bigger buffer.

Terrible for performance, especially when there’s gonna be a LOT of data in the pipeline.

ES15 gives us a solution to this problem: Resizable array buffers.

JavaScript
const resizableBuffer = new ArrayBuffer(1024, {
  maxByteLength: 1024 ** 2,
});

// ✅ resize to 2048 bytes
resizableBuffer.resize(1024 * 2);

4. Asynchronous upgrades

Atomics.waitAsync(): Another powerful async coding feature in ES2024:

It’s when 2 agents share a buffer…

And agent 1 “sleeps” and waits for agent 2 to complete a task.

When agent 2 is done, it notifies using the shared buffer as a channel.

JavaScript
const sharedBuffer = new SharedArrayBuffer(4096);
const bufferLocation = new Int32Array(sharedBuffer);

// ✅ initial value at buffer location
bufferLocation[37] = 0x1330;

async function doStuff() {
  // ✅ agent 1: wait on shared buffer location until notify
  Atomics.waitAsync(bufferLocation, 37, 0x1330).then(
    (r) => {} /* handle arrival */
  );
}

function asyncTask() {
  // ✅ agent 2: notify on shared buffer location
  const bufferLocation = new Int32Array(sharedBuffer);
  Atomics.notify(bufferLocation, 37);
}

You’d be absolutely right if you thought this was similar to normal async/await.

But the biggest difference: The 2 agents can exist in completely different code contexts — they only need access to the same buffer.

And: multiple agents can access or wait on the shared buffer at different times — and any one of them can notify to “wake up” all the others.

It’s like a P2P network. async/await is like client-server request-response.

JavaScript
const sharedBuffer = new SharedArrayBuffer(4096);
const bufferLocation = new Int32Array(sharedBuffer);

bufferLocation[37] = 0x1330;

// ✅ worker receives the shared buffer via postMessage()
const code = `
var ia = null;
onmessage = function (ev) {
  if (!ia) {
    postMessage("Aux worker is running");
    ia = new Int32Array(ev.data);
  }
  postMessage("Aux worker is sleeping for a little bit");
  setTimeout(function () {
    postMessage("Aux worker is waking");
    Atomics.notify(ia, 37);
  }, 1000);
};
`;

async function doStuff() {
  // ✅ agent 1: exists in a Worker context
  const worker = new Worker(
    'data:application/javascript,' + encodeURIComponent(code)
  );
  worker.onmessage = (event) => {
    /* log event */
  };
  worker.postMessage(sharedBuffer);

  Atomics.waitAsync(bufferLocation, 37, 0x1330).then(
    (r) => {} /* handle arrival */
  );
}

function asyncTask() {
  // ✅ agent 2: notify on shared buffer location
  const bufferLocation = new Int32Array(sharedBuffer);
  Atomics.notify(bufferLocation, 37);
}

5. Regex v flag & set operations

A new feature to make regex patterns much more intuitive.

Finding and manipulating complex strings using expressive patterns — with the help of set operations:

JavaScript
// A and B are character classes, like [a-z]

// difference: matches A but not B
[A--B]

// intersection: matches both A and B
[A&&B]

// nested character class
[A--[0-9]]

It also makes it easier to match ever-increasing sets of Unicode characters, like:

  • Emojis: 😀, ❤️, 👍, 🎉, etc.
  • Accented letters: é, à, ö, ñ, etc.
  • Symbols and non-Latin characters: ©, ®, €, £, µ, ¥, etc.

So here we use Unicode regex and the v flag to match all Greek letters:

JavaScript
const regex = /[\p{Script_Extensions=Greek}&&\p{Letter}]/v;

Final thoughts

Overall, 2024 was a significant leap for JavaScript, with several features essential for modern development.

Empowering you to write cleaner code with greater conciseness, expressiveness, and clarity.

Incredible infinite scroll with JavaScript

Infinite scroll: Loading more and more content as the user scrolls down to the end.

No need for pagination, and it increases time spent on the site.

With simple JavaScript we can recreate this easily:

We start with the basic HTML:

HTML
<div id="load-trigger-wrapper">
  <!-- Grid of images -->
  <div id="image-container"></div>

  <!-- Intersection Observer observes this -->
  <div id="load-trigger"></div>
</div>

<!-- Number of loading images -->
<div id="bottom-panel">
  Images: &nbsp;<b><span id="image-count"></span>&nbsp;</b>/
  &nbsp;<b><span id="image-total"></span></b>
</div>

Now it’s time to detect scrolling to the end with the Intersection Observer API:

JavaScript
const loadTrigger = document.getElementById('load-trigger');
// ...
const observer = detectScroll();
// ...

// Detect when the user scrolls to the bottom
function detectScroll() {
  const observer = new IntersectionObserver(
    // Callback also runs after observe()
    (entries) => {
      for (const entry of entries) {
        // ...
        loadMoreImages();
        // ...
      }
    },
    // Set "rootMargin" because of #bottom-panel height
    // 30px upwards from the bottom
    { rootMargin: '-30px' }
  );

  // Start watching #load-trigger div
  observer.observe(loadTrigger);

  return observer;
}

Now let’s show the initial skeleton images:

JavaScript
const imageClass = 'image';
const skeletonImageClass = 'skeleton-image';
// ...

// This function would make requests to an image server
function loadMoreImages() {
  const newImageElements = [];
  // ...
  for (let i = 0; i < amountToLoad; i++) {
    const image = document.createElement('div');

    // 👇 Display each image with skeleton-image class
    image.classList.add(imageClass, skeletonImageClass);

    // Include image in container
    imageContainer.appendChild(image);

    // Store in temp array to update with actual image when loaded
    newImageElements.push(image);
  }
  // ...
}
CSS
.image,
.skeleton-image {
  height: 50vh;
  border-radius: 5px;
  border: 1px solid #c0c0c0;
  /* Three per row, with space for margin */
  width: calc((100% / 3) - 24px);
  /* Initial color before loading animation */
  background-color: #eaeaea;
  /* Grid spacing */
  margin: 8px;
  /* Fit into grid */
  display: inline-block;
}

.skeleton-image {
  transition: all 200ms ease-in;
  /* Contain ::after element with absolute positioning */
  position: relative;
  /* Prevent overflow from ::after element */
  overflow: hidden;
}

.skeleton-image::after {
  content: "";
  /* Cover .skeleton-image div */
  position: absolute;
  top: 0;
  right: 0;
  bottom: 0;
  left: 0;
  /* Setup for slide-in animation */
  transform: translateX(-100%);
  /* Loader image */
  background-image: linear-gradient(
    90deg,
    rgba(255, 255, 255, 0) 0,
    rgba(255, 255, 255, 0.2) 20%,
    rgba(255, 255, 255, 0.5) 60%,
    rgba(255, 255, 255, 0)
  );
  /* Continue animation until image load */
  animation: load 1s infinite;
}

@keyframes load {
  /* Slide-in animation */
  100% {
    transform: translateX(100%);
  }
}

Update skeleton images

For this demo we’ll use colors instead of real images.

JavaScript
function loadMoreImages() {
  // ...
  // Create skeleton images and store them in "newImageElements"

  // Simulate delay from network request
  setTimeout(() => {
    // Colors instead of images
    const colors = getColors(amountToLoad);

    for (let i = 0; i < colors.length; i++) {
      const color = colors[i];

      // 👇 Remove skeleton loading indicator and show color
      newImageElements[i].classList.remove(skeletonImageClass);
      newImageElements[i].style.backgroundColor = color;
    }
  }, 2000);
  // ...
}

function getColors(count) {
  const result = [];
  let randColor = undefined;
  while (result.length < count) {
    // Prevent duplicate colors
    while (!randColor || result.includes(randColor)) {
      randColor = getRandomColor();
    }
    result.push(randColor);
  }
  return result;
}

function getRandomColor() {
  const h = Math.floor(Math.random() * 360);
  return `hsl(${h}deg, 90%, 85%)`;
}

Stop infinite scroll

This is a demo, so we’ll have an artificial limit on the number of images, like 50.

JavaScript
const imageCountText = document.getElementById('image-count');
// ...
let imagesShown = 0;
// ...

function loadMoreImages() {
  // ...
  const amountToLoad = Math.min(loadLimit, imageLimit - imagesShown);

  // Load skeleton images...
  // Update skeleton images...

  // Update image count
  imagesShown += amountToLoad;
  imageCountText.innerText = imagesShown;

  if (imagesShown === imageLimit) {
    observer.unobserve(loadTrigger);
  }
}

Optimize performance with throttling

We do this by using a throttle function to only allow new loads within a certain time window.

JavaScript
let throttleTimer;

// Only one image batch can be loaded within a second
const throttleTime = 1000;
// ...

function throttle(callback, time) {
  // Prevent additional calls until timeout elapses
  if (throttleTimer) {
    console.log('throttling');
    return;
  }

  throttleTimer = true;

  setTimeout(() => {
    callback();

    // Allow additional calls after timeout elapses
    throttleTimer = false;
  }, time);
}

By calling throttle() in the Intersection Observer’s callback with a time of 1000, we ensure that loadMoreImages() is never called multiple times within a second.

JavaScript
function detectScroll() {
  const observer = new IntersectionObserver(
    (entries) => {
      // ...
      throttle(() => {
        loadMoreImages();
      }, throttleTime);
    },
    // ...
  );
  // ...
}

OpenAI’s new AI agent will change everything

The new OpenAI operator agent will change the world forever.

This is going to be a real AI agent that actually works — unlike gimmicks like AutoGPT.

Soon AI will be able to solve complex goals with lots of interconnected steps.

Completely autonomous — no continuous prompts — zero human guidance apart from dynamic input for each step.

Imagine you could just tell ChatGPT “teach me French” and that’ll be all it needs…

  • Analyzing your French level with a quick quiz
  • Crafting a comprehensive learning plan
  • Setting phone and email reminders to help you stick to your plan…
Not quite there yet 😉

This is basically the beginning of AGI — if it isn’t already.

And when you think about it, this is already what apps like Duolingo try to do — solving complex problems.

But an AI agent will do this in a far more comprehensive and personalized way — intelligently adjusting to the user’s needs and changing desires.

You can say something super vague like “plan my next holiday” and instantly your agent gets to work:

  • Analyzes your calendar to know the best next holiday time
  • Figures out somewhere you’ll love from previous conversations that stays within your budget
  • Books flights and sets reservations according to your schedule

This will change everything.

Which is why they’re not the only ones working on agents — the AI race continues…

We have Google apparently working on “Project Jarvis” — an AI agent to automate web-based tasks in Chrome.

Automatically jumping from page to page and filling out forms and clicking buttons.

Maybe something like Puppeteer — a dev tool programmers use to make the browser do stuff automatically — but it isn’t hard-coded and it’s far more flexible.

Anthropic has already released their own AI agent in Claude 3.5 Sonnet — a groundbreaking “computer use” feature.

Google and Apple will probably have a major advantage over OpenAI and Anthropic though — cause of Android and iOS.

Gemini Android and Apple Intelligence could seamlessly switch between all your mobile apps for a complex chain of actions.

Since they have deep access to the OS they could even use the apps without having to open them visually.

They’ll control system settings.

You call the Apple Intelligence agent, “Send a photo of a duck to my Mac”, and it’ll generate an image of a duck, turn on Airdrop on iPhone, send the photo and turn Airdrop back off.

But the most powerful thing about all these agents will be the API interface — letting you build tools to plug into the agent.

Like you can create a “Spotify” tool that’ll let you play music from the agent. Or a “Google” tool to check your email and plan events with your calendar.

So it all really looks promising — and as usual folks like Sam Altman are already promising the world with it.

AI agents may well be the future—personalized, autonomous, and powerful. They’ll revolutionize how we learn, plan, and interact. The race is on.

We may see devastating job-loss impacts in several industries — including software development…

Let’s see how it goes.

Bye bye Apple Intelligence — Gemini for iPhone is amazing 😲

Apple Intelligence isn’t coming to like 90% of iPhones and everyone is pissed…

So no better time for Google to jump on this and finally push out their chatbot for iPhone.

And they didn’t disappoint on features — live conversation, deep app integration, stunning image generation…

I’ve been using the web app ever since it was called Bard, and it’s been great, so I was pretty keen on this.

Gemini isn’t aMaZiNg, but it works well for getting up-to-date info, unlike ChatGPT, which is stuck in April 2023 and doesn’t know when to search the web.

Impressively Gemini has already skyrocketed to the top of charts in Productivity.

Or maybe not so impressive since it’s Google and it’s only #2 — but then #1 is ChatGPT so…

The first thing I noticed is the clean minimalist interface, quite similar to the one on desktop.

You have more than enough space for core things like input.

It searches the entire web but responds incredibly quickly to give detailed answers to complex questions.

Spark your creativity and generate ideas for stories, poems, or scripts.

The image generation has drastically improved with their new Imagen 3 engine:

Refine images easily — though it changes other parts of the image too:

Gemini Live is a powerful standout feature, enabling real-time interactive conversations.

It provides context-based responses and adapts to the flow of the conversation.

Looks like they did their best to simulate a live conversation with a human — no chat or text feedback confirming what you say. Although they do save the chat history.

The voices aren’t robotic and have decent intonation.

One cool thing is it intelligently figures out when you’re not done speaking — like in a real convo.

Me: What is the umm…

Gemini: What is the what?

Me: I don’t know

Gemini: That’s totally fine…

Me: Shut up

Gemini: I’m sorry I’m not comfortable engaging in conversation that’s disrespectful

Me: I love you

Gemini: Thank you for your kind words…

You can control it from the Notification Centre.

It integrates seamlessly with all your other Google services — I asked it to check my latest email and it was on point.

Final thoughts

It’ll be exciting to see what Gemini can do in the future.

Of course it has no chance of deep iOS integration like Apple Intelligence has.

But it’s a versatile and intelligent AI Assistant worth checking out.

The new M4 Macbook Pro is a MONSTER

This beast just got upgraded to the M4 chip and it’s more dangerous than ever.

This is the greatest value for money you’ll ever get in a Macbook Pro.

Especially as Apple finally caved to give us 16 GB RAM in base models for the same price.

The unbelievable performance of the M4 chip makes it every bit as deadly as the new M4 Mac Mini and iMac — yet ultraportable and lightweight.

Starting with a 10-core CPU and 10-core GPU.

Image source: theverge.com

Can’t compare with the Macbook Air on portability tho — lightest PC I’ve ever felt.

Apple says M4 is up to 3 times faster than M1 — but M1 is still very good so don’t go rushing to throw your money at Tim Cook.

In practice you’ll probably only notice a real difference for long tasks in heavy apps — like the Premiere Pro 4K export in the benchmarks we just saw.

M4 is also an unbelievable 27 times faster than Intel Core i7 in tasks like video processing:

Core i7 used to be a big deal back then!

So imagine how much faster the M4 Max would be than the Intel Core i7?

Yeah, the M4 processor comes in 3 tiers: M4, M4 Pro, and M4 Max.

The base model also comes with as many as 3 Thunderbolt ports — unlike the 2 in previous base models.

Thunderbolt ports look just like USB-C but with much faster data transfer speeds — and obviously light years ahead of USB-A.

With Thunderbolt 5 you get incredible speeds of up to 120 Gb/s — available in Pro models with the M4 Pro and M4 Max.

Along with a standard MagSafe 3 charging port and an SDXC card slot to easily import images from digital cameras.

Plus a headphone jack and an HDMI port.

Definitely geared for the pros, and pretty packed compared to the ultraminimalist MacBook Air:

Btw, ever wondered why Apple still puts headphone jacks in Macs?

It’s because wired headphones give maximum audio quality and have zero lag — something that’s essential for the Pros. Perfect audio.

And perfect video too — with a sophisticated 12 mega-pixel Center Stage camera.

Center Stage makes sure you’re always at the center of the recording even as you move around.

Crystal clear Liquid Retina display in two great sizes

  • 14 inch — 3024 x 1964
  • 16 inch — 3456 x 2234

Imo, take the 14 over the 16 — it’s more than enough screen space.

You know I thought the 13 inch MacBook Air would be small but it turned out perfect and I was happy not to go with the 15.

My previous 15.6″ PC now seems humongous and too much for a laptop. 16″ seems insane.

Better to get a huge external monitor:

But on its own, it’s a monstrously good laptop for coding and other heavy tasks:

The base model starts at $1599 for 16 GB RAM and a 512 GB SSD with the M4 chip, with many lethal upgrade options:

  • M4 16 GB RAM and 1 TB SSD — $1799
  • M4 24 GB RAM and 1 TB SSD — $1999
  • M4 Pro 24 GB and 512 GB SSD — $1999
  • M4 Pro 24 GB and 1 TB SSD — $2399
  • M4 Max 36 GB RAM and 1 TB SSD — $3199 🤯

Overall the M4 MacBook Pro strikes the perfect balance of power, sleek design, and value, making it an excellent choice for professionals seeking the ideal portable workstation.

Microsoft is getting even more desperate with AI 🤦‍♂️

Microsoft going all in on the AI bandwagon…

Bing, Edge, Windows… now it’s Notepad’s turn.

Image source: bleepingcomputer.com

“Custom rewrite” — tweak tone, format and length:

Image source: theverge.com

Not bad but I doubt most people will use it.

Most people just use Notepad as a simple text editor to hold temp info and other super short-term stuff… not for this.

And wouldn’t it have been much better if it was just a free-form text input to rewrite the text however we want?

Even good old Paint will be getting AI soon — “generative erase” (lol)

So you can remove any object from the photo and it’ll automagically create a seamless background.

❌ Before erase:

Image source: bleepingcomputer.com

✅ After erase:

Image source: bleepingcomputer.com

Two more for the growing list of MS products possessed with the AI spirit.

Even their Surface devices are all about AI now:

Even their Android keyboard app 😂

Remember this?

When Bing Chat first came out — an interesting chatbot getting a lot of attention that could have finally made a dent in Google’s numbers.

Only for them to brutally degrade it into your everyday chatbot.

Then they brought their annoying Copilot button to Edge — one more setting to change whenever I newly install it.

Then they brazenly replaced the NEW TAB button with this garbage in their mobile apps. That was the last straw for me — no more Edge on Android/iOS.

Imagine depriving users of easy access to such a fundamental action in a browser because of AI.

Imagine the horror of a Camera app where you see a Copilot button where the Snap button should be.

Luckily I don’t use Windows anymore so I won’t have to deal with their Copilot in Windows garbage:

And their aggressive marketing has certainly rubbed a lot of people the wrong way — like it did when Edge went Chromium.

Lol… someone was mad.

It’s just insane how many companies jumped on the AI bandwagon ever since ChatGPT.

Notion, Spotify, Zapier, Canva… even Apple finally caved.

Everything is AI now. Even the most mundane procedural algorithm to automate something is AI lol.

No doubt many AI upgrades have been like the recent Google Search Gen AI — that probably decimated traffic of millions of sites out there.

But a great deal of them add very little value — clearly just preying on the emotions of users and investors.

But one thing is certain: this AI hype isn’t stopping anytime soon.

Let’s see how long until the so-called AGI comes around.

New JavaScript pipeline operator: transform anything into a one-liner

With the pipeline operator you’ll stop writing code like this:

JavaScript
const names = ['USA', 'Australia', 'CodingBeauty'];

const lowerCasedNames = names.map((name) =>
  name.toLowerCase()
);

const hyphenJoinedNames = lowerCasedNames.join('-');

const hyphenJoinedNamesCapitalized = capitalize(
  hyphenJoinedNames
);

const prefixedNames = `Names: ${hyphenJoinedNamesCapitalized}`;

console.log(prefixedNames); // Names: Usa-australia-codingbeauty

and start writing code like this:

JavaScript
// Hack pipes: |>
['USA', 'Australia', 'CodingBeauty']
  |> %.map((name) => name.toLowerCase())
  |> %.join('-')
  |> capitalize(%)
  |> `Names: ${%}`
  |> console.log(%);
// Names: Usa-australia-codingbeauty

So refreshingly clean — and elegant! All those temporary variables are gone — not to mention the time it took to come up with those names *and* type them (not everyone types like The Flash, unfortunately).

You may have heard this partially true quote attributed to Phil Karlton: “There are only two hard things in computer science: cache invalidation and naming things“.

Using the JavaScript pipeline operator clears out the clutter to boost readability, letting you write data-transforming code (basically all code) in a more intuitive manner.

Verbosity should be avoided as much as possible, and this compacts code so much better than reusing short-named variables:

JavaScript
let buffer = await sharp('coding-beauty-v1.jpg').toBuffer();

let image = await jimp.read(buffer);
image.grayscale();
buffer = await image.getBufferAsync('image/png');

buffer = await sharp(buffer).resize(250, 250).toBuffer();

image = await jimp.read(buffer);
image.sepia();
buffer = await image.getBufferAsync('image/png');

await sharp(buffer).toFile('coding-beauty-v1.png');

Hopefully, almost no one codes like this on a regular basis. It’s a pretty horrible technique at a large scale; a perfect example of why we embrace immutability and type systems.

Unlike with the pipeline operator, there’s no certainty that the variable always contains the value you set at any given point; you’ll need to climb up the scope to look for re-assignments. We could have used the _ at an earlier point in the code; the value it holds at various points is simply not guaranteed.

Now we’re just using an underscore, so without checking out the right-hand side of those re-assignments you can’t quickly know what the type of the variable is, unless you have a smart editor like VS Code (although I guess you could say that doesn’t matter since they’re supposed to be “temporary” — at least until they’re not!).

All in all: poor readability. Fragile and unstable. 5 times harder for someone new to understand. Also, some would say underscores are “ugly”, especially in languages like JavaScript where they hardly show up.

JavaScript
// setup
function one() {
  return 1;
}
function double(x) {
  return x * 2;
}

let _;
_ = one(); // is now 1.
_ = double(_); // is now 2.

Promise.resolve().then(() => {
  // This does *not* print 2!
  // It prints 1, because '_' is reassigned downstream.
  console.log(_);
});

// _ becomes 1 before the promise callback.
_ = one(_);

Okay, so why don’t we just avoid this infestation of temporary underscores and nest everything into one gigantic one-liner?

JavaScript
await sharp(
  jimp
    .read(
      await sharp(
        await jimp
          .read(await sharp('coding-beauty-v1.jpg').toBuffer())
          .grayscale()
          .getBufferAsync('image/png')
      )
        .resize(250, 250)
        .toBuffer()
    )
    .sepia()
    .getBufferAsync('image/png')
).toFile('coding-beauty-v2.png');

It’s a mess. The underscore is gone, but who in the world can understand this at a glance? How easy is it to tell how the data flows through this code, and to make any necessary adjustments?

Understanding, at a glance — this is what we should strive for with every line of code we write.

The pipeline operator greatly outshines every other method, giving us both freedom from temporary variables and readability. It was designed for this.

JavaScript
// We don't need to wrap 'await' anymore
await sharp('coding-beauty-v1.jpg').toBuffer()
  |> jimp.read(%)
  |> %.grayscale()
  |> await %.getBufferAsync('image/png')
  |> await sharp(%).resize(250, 250).toBuffer()
  |> await jimp.read(%)
  |> %.sepia()
  |> await %.getBufferAsync('image/png')
  |> await sharp(%).toFile('coding-beauty-v2.png');

Here the % only exists within this particular pipeline.

Method chaining?

Who hasn’t used and combined heavily popular array methods like map, filter, and sort? Very hard to avoid in applications involving any form of list manipulation.

JavaScript
const numbers = '4,2,1,3,5';

const result = numbers
  .split(',')
  .map(Number)
  .filter((num) => num % 2 === 0)
  .map((num) => num * 2)
  .sort(); // [4, 8]

This is actually great. There aren’t any temporary variables or unreadable nesting here either and we can easily follow the chain from start to finish.

The formatting lets us easily add more methods at any point in the chain; a feature-packed editor like VS Code can even swap the processing order of two methods with the Alt + ↑ and Alt + ↓ line-move shortcuts.

There’s a reason why libraries like Node’s core http module and jQuery are designed like this:

JavaScript
const http = require('http');

http
  .createServer((req, res) => {
    console.log('Welcome to Coding Beauty');
  })
  .on('error', () => {
    console.log('Oh no!');
  })
  .on('close', () => {
    console.log('Uuhhm... bye!');
  })
  .listen(3000, () => {
    console.log('Find me on port 3000');
  });

The problem with method chaining is that we can’t use it everywhere. If the class wasn’t designed like that we’re stuck and out in the cold.

It doesn’t work very well with generator methods, async/await and function/method calls outside the object, like we saw here:

JavaScript
await sharp(
  // 3-method chain, but not good enough!
  jimp
    .read(
      await sharp(
        // Same here
        await jimp
          .read(await sharp('coding-beauty-v1.jpg').toBuffer())
          .grayscale()
          .getBufferAsync('image/png')
      )
        .resize(250, 250)
        .toBuffer()
    )
    .sepia()
    .getBufferAsync('image/png')
).toFile('coding-beauty-v2.png');

But all this and more works with the pipeline operator — even object literals and the async import() function.

JavaScript
await sharp('coding-beauty-v1.jpg').toBuffer()
  |> jimp.read(%)
  |> %.grayscale()
  |> await %.getBufferAsync('image/png')
  |> await sharp(%).resize(250, 250).toBuffer()
  |> await jimp.read(%)
  |> %.sepia()
  |> await %.getBufferAsync('image/png')
  |> await sharp(%).toFile('coding-beauty-v2.png');

Could have been F# pipes

We would have been using the pipeline operator very similarly to F# pipes, with the above turning out like this instead:

JavaScript
(await sharp('coding-beauty-v1.jpg').toBuffer())
  |> (x) => jimp.read(x)
  |> (x) => x.grayscale()
  |> (x) => x.getBufferAsync('image/png')
  |> await
  |> (x) => sharp(x).resize(250, 250).toBuffer()
  |> await
  |> (x) => jimp.read(x)
  |> await
  |> (x) => x.sepia()
  |> (x) => x.getBufferAsync('image/png')
  |> await
  |> (x) => sharp(x).toFile('coding-beauty-v2.png')
  |> await;

There was an alternative design. But you can already see how this makes for an inferior option: only single-argument function calls get the concise form, and everything else is more verbose.

Its weird handling of async/await was also a key reason why it got rejected — along with memory usage concerns. So forget about F# pipes in JS!

Use the pipeline operator right now

Yes you can — with Babel.

Babel has a nice habit of implementing features before they’re officially integrated into the language; it did this for top-level await, optional chaining, and many others. The pipeline operator is no exception.

Just use the @babel/plugin-proposal-pipeline-operator plugin and you’re good to go.
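For reference, here’s roughly what that setup looks like in a Babel config — a sketch assuming the current Hack-style proposal with % as the topic token (check the plugin docs for the exact options in your version):

```json
{
  "plugins": [
    [
      "@babel/plugin-proposal-pipeline-operator",
      { "proposal": "hack", "topicToken": "%" }
    ]
  ]
}
```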

It’s optional of course — but not for long.

Prettier, the code formatter, is already prepared.

Even though we can’t say the same about VS Code or Node.js.

Right now there’s even speculation that % won’t be the final symbol used to pass values around in the pipeline; let’s watch and see how it all plays out.

Final thoughts

It’s always great to see new and exciting features come to the language. With the JavaScript pipeline operator, you’ll cleanse your code of temporary variables and cryptic nesting, and greatly boost code readability, efficiency, and quality.

The new M4 Mac mini is a MONSTER

They call it mini but what it can do is far from mini.

Only 5 x 5 x 2 inches and 1.5 pounds. That’s mega-light.

Yet the M4 chip makes it as dangerous as the new MacBook Pro — even though it costs much less.

Image source: verge.com

And just look at the ports:

Image source: apple.com

And you know I saw this pic on their website and was like, What the hell is this?

Then I saw this:

Image source: apple.com

Ohhh… it’s a CPU — no, a system unit…

It’s a “pure” computer with zero peripherals — not even a battery. You’re buying everything yourself.

Definitely dramatically superior to the gigantic system unit I used when I was younger.

But I didn’t think this was still a huge thing. Especially with integrated screens like the iMac.

Mac Mini is like the complete opposite of the iMac — a gigantic beast that comes with everything…

Image source: apple.com

iMac gives you predictability — no analysis paralysis in getting all your parts (although you can just buy Apple anyway)

Image source: apple.com

Mac Mini is jam-packed with ports:

On the front we’ve got two 10 Gbps USB-C ports and a headphone jack:

Image source: apple.com

Back ports:

Lovely crisp icons indicate what they’re each for…

Image source: apple.com

But they put the power button at the bottom — dumb move!

You’ll have to raise it up any time you want to turn it on.

Wouldn’t it have been cool if instead they made the power button huge to cover the bottom completely — so you’d just have to push it down like those red buttons on game shows?

But once it’s all powered up the possibilities are endless:

From basic typing to heavyweight gaming — like Apple Arcade stuff:

Image source: apple.com

And coding of course:

Image source: apple.com

And with an improved thermal system, Mac Mini can handle all these demanding tasks quietly:

Image source: apple.com

The base plan starts at $599 for 16 GB RAM and a 256 GB SSD with the standard M4 chip, and you can pay for higher configs like other Mac devices allow:

  • 16 GB RAM and 512 GB SSD – $799
  • 24 GB RAM and 512 GB SSD – $999

And then there’s the M4 Pro — 24 GB RAM and 512 GB SSD for $1399.

Overall the M4 Mac Mini is a perfect blend of power, compact design, and value, great for professionals looking for the ideal desktop workstation.

This amazing service saves you from wasting money on a new PC

Shadow PC saves you from wasting thousands of dollars on a new PC.

A fully customizable computer in the cloud with amazing capabilities.

Built to handle heavyweight work: from hardcore gaming to video editing to game dev.

A Windows you can take anywhere you go. Install whatever you want — if it runs on Windows, it runs on Shadow.

❌ Before:

You spend hours searching for the perfect PC to buy with specs that meet your needs and also stay within budget.

You empty your wallet and waste more time ordering it online or checking out your nearby stores.

Then you waste more money on data to download everything you need to finally get started.

✅ Now:

Join Shadow and get a cloud PC instantly.

Install everything with lightning-fast Internet speeds of over 1 Gbps:

Done.

And this Internet has nothing to do with your data plan — you only need data to stream the screen to your device; all the uploads and downloads happen on the remote PC at zero cost to you.

Lightweight and straightforward — open the Shadow app and you get to the desktop in less than a minute.

Turn it off and come back whenever to pick right where you left off.

Play hardcore CPU-intensive games without making a dent in your system resources or storage space. Your PC fan will be super silent and your CPU will be positively bored out of its mind with idleness.

Make it full-screen and enjoy a seamless, immersive experience.

When I was using it on a Windows PC there were times when I didn’t even know which was which. Cause it’s literally just Windows — no curated interface like in some gaming services.

It’s also got apps for Mac, Android, iOS, and Linux.

Including a convenient browser-based mode for quick and easy access:

Cost?

So there are two pricing tiers — Shadow Gaming for gaming and Shadow Pro for professional work like video editing.

For just $10 a month you get a powerful 3.1 GHz processor with 6 GB of RAM, a generous 5 TB of HDD storage, AND 256 GB of SSD storage!

Easily capable of Fortnite, Minecraft, and many other popular games.

You also get 1 Gb/s download bandwidth, guaranteed.

Upgrading to the Boost plan will get you an additional 6 GB RAM and a 256 GB SSD for $30 a month.

And then there’s the most powerful Power plan for even more… POWER.

Shadow Pro’s pricing is a bit different.

The plan names are typical and boring, but the starting plan is cheaper. I went with Standard and it was great.

This is amazing! How do I get started?

Just head over to shadow.tech and create an account:

After subscribing to a plan they’ll start setting up your cloud PC right away. Looks like they do it manually, so it’ll take anywhere from 30 to 60 minutes to complete.

The email they sent me:

Install the app and sign in.

Hit START NOW and start enjoying your personal computer in the cloud.

Final thoughts

New Gemini 1.5 FLASH model: An absolute Google game changer

So Google has finally decided to show OpenAI who the real king of AI is.

Their new Gemini 1.5 Flash model blows GPT-4o out of the water and the capabilities are hard to believe.

Lightning fast.

33 times cheaper than GPT-4o but with a 700% greater context window — 1 million tokens.

What is 1 million tokens in the real world? Approximately:

  • Over 1 hour of video
  • Over 30,000 lines of code
  • Over 700,000 words

❌GPT-4o cost:

  • Input: $2.50 per million tokens
  • Output: $10 per million tokens
  • Cached input: $1.25 per million tokens

✅ Gemini 1.5 Flash cost:

  • Input: $0.075 per million tokens
  • Output: $0.30 per million tokens
  • Cached input: $0.01875 per million tokens
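A quick sanity check of the “33 times cheaper” claim, using the input rates above:

```javascript
// Input-token rates quoted above, in $ per million tokens
const gpt4oInput = 2.5;
const flashInput = 0.075;

console.log((gpt4oInput / flashInput).toFixed(1)); // "33.3", roughly 33x cheaper
```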

And then there’s the mini Flash-8B version for cost-efficient tasks — 66 times cheaper:

And the best part is the multi-modality — it can reason with text, files, images and audio in complex integrated ways.

And 1.5 Flash has almost all the capabilities of Pro but much faster. And as a dev you can start using them now.
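For instance, here’s a rough sketch of calling 1.5 Flash through the REST API. The endpoint and body shape follow Google’s public docs, but `buildGeminiRequest` is a hypothetical helper and the API key is a placeholder:

```javascript
// Sketch: build a request for the Gemini v1beta generateContent endpoint.
// The URL and body shape follow Google's docs; the helper and key are placeholders.
function buildGeminiRequest(apiKey, prompt) {
  return {
    url: `https://generativelanguage.googleapis.com/v1beta/models/gemini-1.5-flash:generateContent?key=${apiKey}`,
    options: {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ contents: [{ parts: [{ text: prompt }] }] }),
    },
  };
}

const req = buildGeminiRequest('YOUR_API_KEY', 'Summarize this article in one line');
// then send it:
// const res = await fetch(req.url, req.options);
// const data = await res.json();
```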

Gemini 1.5 Pro was tested with a 44-minute silent movie and astonishingly, it easily analyzed the movie into various plot points and events. Even pointing out tiny details that most of us would miss on first watch.

Meanwhile the GPT-4o API only lets you work with text and images.

You can easily create, test and refine prompts in Google’s AI Studio — completely free.

It doesn’t count in your billing like in OpenAI playground.

Just look at the power of Google AI Studio — creating a food recipe based on an image:

I uploaded this delicious bread from gettyimages:

Now:

What if I want the response to be a specialized format for my API or something?

Then you can just turn on JSON mode and specify the response schema:
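As a sketch, the schema rides along in the request’s generationConfig. `responseMimeType` and `responseSchema` are the documented field names; the recipe fields themselves are just an example:

```javascript
// Sketch of Gemini's JSON mode; responseMimeType and responseSchema are
// documented API fields, while the recipe schema is illustrative.
const generationConfig = {
  responseMimeType: 'application/json',
  responseSchema: {
    type: 'object',
    properties: {
      recipeName: { type: 'string' },
      ingredients: { type: 'array', items: { type: 'string' } },
      prepTimeMinutes: { type: 'number' },
    },
    required: ['recipeName', 'ingredients'],
  },
};
```

The model is then constrained to reply with JSON matching this shape, so the response parses straight into your API’s types.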

OpenAI playground has this too, but it’s not as intuitive to work with.

Another upgrade Gemini has over OpenAI is how creative it can be.

In Gemini you can increase the temperature from 0 to 200% to control how random and creative the responses are:

Meanwhile in OpenAI if you try going far beyond 100%, you’ll most likely get a whole load of literal nonsense.

And here’s the best part — when you’re done creating your prompt you can just use Get code — easily copy and paste the boilerplate API code and move lightning-fast in your development.

Works in several languages including Kotlin, Swift and Dart — efficient AI workflow in mobile dev.

In the OpenAI playground you can only get code for Python and JavaScript.

Final thoughts

Gemini 1.5 Flash is a game-changer, offering unparalleled capabilities at a fraction of the cost.

With its advanced multi-modality, ease of use, generous free pricing, and creative potential, it sets a new standard for AI, leaving GPT-4o in the dust.