Tari Ibaba

Tari Ibaba is a software developer with years of experience building websites and apps. He has written extensively on a wide range of programming topics and has created dozens of apps and open-source libraries.

Bye bye Apple Intelligence — Gemini for iPhone is amazing 😲

Apple Intelligence isn’t coming to like 90% of iPhones and everyone is pissed…

So no better time for Google to jump on this and finally push out their chatbot for iPhone.

And they didn’t disappoint on features — live conversation, deep app integration, stunning image generation…

I’ve been using the web app since back when it was still called Bard and it’s been great, so I was pretty keen on this.

Gemini isn’t aMaZiNg, but it works well for getting up-to-date info, unlike ChatGPT, which is stuck in April 2023 and doesn’t know when to search the web.

Impressively, Gemini has already skyrocketed to the top of the charts in Productivity.

Or maybe not so impressive since it’s Google and it’s only #2 — but then #1 is ChatGPT so…

The first thing I noticed is the clean minimalist interface, quite similar to the one on desktop.

You have more than enough space for core things like input.

It searches the entire web but responds incredibly quickly to give detailed answers to complex questions.

Spark your creativity and generate ideas for stories, poems, or scripts.

The image generation has drastically improved with their new Imagen 3 engine:

Refine images easily — though it changes other parts of the image too:

Gemini Live is a powerful standout feature, enabling real-time interactive conversations.

It provides context-based responses and adapts to the flow of the conversation.

Looks like they did their best to simulate a live conversation with a human — no chat or text feedback confirming what you say, although they do save the chat history.

The voices aren’t robotic and have decent intonation.

One cool thing is it intelligently figures out when you’re not done speaking — like in a real convo.

Me: What is the umm…

Gemini: What is the what?

Me: I don’t know

Gemini: That’s totally fine…

Me: Shut up

Gemini: I’m sorry I’m not comfortable engaging in conversation that’s disrespectful

Me: I love you

Gemini: Thank you for your kind words…

You can control it from the Notification Centre.

It integrates seamlessly with all your other Google services — I asked it to check my latest email and it was on point.

Final thoughts

It’ll be exciting to see what Gemini can do in the future.

Of course it can’t match the deep iOS integration Apple Intelligence has.

But it’s a versatile and intelligent AI Assistant worth checking out.

The new M4 MacBook Pro is a MONSTER

This beast just got upgraded to the M4 chip and it’s more dangerous than ever.

This is the greatest value for money you’ll ever get in a MacBook Pro.

Especially as Apple finally caved to give us 16 GB RAM in base models for the same price.

The unbelievable performance of the M4 chip makes it every bit as deadly as the new M4 Mac Mini and iMac — yet ultraportable and lightweight.

Starting with a 10-core CPU and 10-core GPU.

Image source: theverge.com

Can’t compare with the MacBook Air on portability tho — lightest PC I’ve ever felt.

Apple says M4 is up to 3 times faster than M1 — but M1 is still very good so don’t go rushing to throw your money at Tim Cook.

In practice you’ll probably only notice a real difference for long tasks in heavy apps — like the Premiere Pro 4K export in the benchmarks we just saw.

M4 is also an unbelievable 27 times faster than Intel Core i7 in tasks like video processing:

Core i7 used to be a big deal back then!

So imagine how much faster the M4 Pro and M4 Max would be than the Intel Core i7.

Yeah their M4 processor comes in 3 tiers: M4, M4 Pro, and M4 Max.

The base model also comes with as many as 3 Thunderbolt ports — unlike the 2 in previous base models.

Thunderbolt ports look just like USB-C but with much faster data transfer speeds — and obviously light years ahead of USB-A.

With Thunderbolt 5 you get incredible speeds of up to 120 Gb/s — available in Pro models with M4 Pro and M4 Max.

Along with a standard MagSafe 3 charging port and an SDXC card slot to easily import images from digital cameras.

Plus a headphone jack and an HDMI port.

Definitely geared for the pros, and pretty packed compared to the ultraminimalist MacBook Air:

Btw ever wondered why Apple still puts headphone jacks in Macs?

It’s because wired headphones give maximum audio quality and have zero lag — something that’s essential for the Pros. Perfect audio.

And perfect video too — with a sophisticated 12 mega-pixel Center Stage camera.

Center Stage makes sure you’re always at the center of the recording even as you move around.

Crystal clear Liquid Retina display in two great sizes

  • 14 inch — 3024 x 1964
  • 16 inch — 3456 x 2234

Imo take the 14 over the 16 — it’s more than enough screen space.

You know I thought the 13 inch MacBook Air would be small but it turned out perfect and I was happy not to go with the 15.

My previous 15.6″ PC now seems humongous and too much for a laptop. 16″ seems insane.

Better to get a huge external monitor:

But on its own, it’s a monstrously good laptop for coding and other heavy tasks:

The base plan starts at $1599 for 16 GB RAM and 512 GB SSD with the M4 chip, with many lethal upgrade options:

  • M4 16 GB RAM and 1 TB SSD — $1799
  • M4 24 GB RAM and 1 TB SSD — $1999
  • M4 Pro 24 GB and 512 GB SSD — $1999
  • M4 Pro 24 GB and 1 TB SSD — $2399
  • M4 Max 36 GB RAM and 1 TB SSD — $3199 🤯

Overall the M4 MacBook Pro strikes the perfect balance of power, sleek design, and value, making it an excellent choice for professionals seeking the ideal portable workstation.

Microsoft is getting even more desperate with AI 🤦‍♂️

Microsoft going all in on the AI bandwagon…

Bing, Edge, Windows… now it’s Notepad’s turn.

Image source: bleepingcomputer.com

“Custom rewrite” — tweak tone, format and length:

Image source: theverge.com

Not bad but I doubt most people will use it.

Most people just use Notepad as a simple text editor to hold temp info and other super short-term stuff… not for this.

And wouldn’t it have been much better if it was just a text input to rewrite the text however we want flexibly?

Even good old Paint will be getting AI soon — “generative erase” (lol)

So you can remove any object from the photo and it’ll automagically create a seamless background.

❌ Before erase:

Image source: bleepingcomputer.com

✅ After erase:

Image source: bleepingcomputer.com

Two more for the growing list of MS products possessed with the AI spirit.

Even their Surface devices are all about AI now:

Even their Android keyboard app 😂

Remember this?

When Bing Chat first came out — an interesting chatbot getting a lot of attention that could have finally made a dent in Google’s numbers.

Only for them to brutally degrade it into your everyday chatbot.

Then they brought their annoying Copilot button to Edge — one more setting to change whenever I do a fresh install.

Then they brazenly replaced the NEW TAB button with this garbage in their mobile apps. That was the last straw for me — no more Edge on Android/iOS.

Imagine depriving users of easy access to such a fundamental action in a browser because of AI.

Imagine the horror of a Camera app where you see a Copilot button where the Snap button should be.

Luckily I don’t use Windows anymore so I won’t have to deal with their Copilot in Windows garbage:

And their aggressive marketing has certainly rubbed a lot of people the wrong way — like they did when Edge went Chromium.

Lol… someone was mad.

It’s just insane how many companies jumped on the AI bandwagon ever since ChatGPT.

Notion, Spotify, Zapier, Canva… even Apple finally caved.

Everything is AI now. Even the most mundane procedural algorithm to automate something is AI lol.

No doubt some AI upgrades have been consequential, like the recent Google Search gen AI — which probably decimated the traffic of millions of sites out there.

But a great deal of them add very little value — clearly just to prey on the emotions of users and investors.

But one thing is certain: this AI hype isn’t stopping anytime soon.

Let’s see how long until the so-called AGI comes around.

New JavaScript pipeline operator: transform anything into a one-liner

With the pipeline operator you’ll stop writing code like this:

JavaScript
const names = ['USA', 'Australia', 'CodingBeauty'];
const lowerCasedNames = names.map((name) =>
  name.toLowerCase()
);
const hyphenJoinedNames = lowerCasedNames.join('-');
const hyphenJoinedNamesCapitalized = capitalize(
  hyphenJoinedNames
);
const prefixedNames = `Names: ${hyphenJoinedNamesCapitalized}`;
console.log(prefixedNames); // Names: Usa-australia-codingbeauty

and start writing code like this:

JavaScript
// Hack pipes: | and >
['USA', 'Australia', 'CodingBeauty']
  |> %.map((name) => name.toLowerCase())
  |> %.join('-')
  |> capitalize(%)
  |> `Names: ${%}`
  |> console.log(%); // Names: Usa-australia-codingbeauty

So refreshingly clean — and elegant! All those temporary variables are gone — not to mention the time it took to come up with those names *and* type them (not everyone types like The Flash, unfortunately).

You may have heard this partially true quote attributed to Phil Karlton: “There are only two hard things in computer science: cache invalidation and naming things”.

Using the JavaScript pipeline operator clears out the clutter to boost readability and lets you write data-transforming code (basically all code) in a more intuitive manner.

Verbosity should be avoided as much as possible, and this works so much better for compacting code than reusing short-named variables:

JavaScript
let buffer = await sharp('coding-beauty-v1.jpg').toBuffer();
let image = await jimp.read(buffer);
image.grayscale();
buffer = await image.getBufferAsync('image/png');

buffer = await sharp(buffer).resize(250, 250).toBuffer();
image = await jimp.read(buffer);
image.sepia();
buffer = await image.getBufferAsync('image/png');

await sharp(buffer).toFile('coding-beauty-v1.png');

Hopefully, almost no one codes like this on a regular basis. It’s a pretty horrible technique at a large scale; a perfect example of why we embrace immutability and type systems.

Unlike the pipeline operator, there’s no certainty that the variable always contains the value you set at any given point; you’ll need to climb up the scope to look for re-assignments. We could have used the _ at an earlier point; the value it has at various points in the code is simply not guaranteed.

Now we’re just using an underscore, so without checking out the right-hand side of those re-assignments you can’t quickly know what the type of the variable is, unless you have a smart editor like VS Code (although I guess you could say that doesn’t matter since they’re supposed to be “temporary” — at least until they’re not!).

All in all, poor readability. Fragile and unstable. 5 times harder for someone new to understand. Also, some would say underscores are “ugly”, especially in languages like JavaScript where they hardly show up.

JavaScript
// setup
function one() { return 1; }
function double(x) { return x * 2; }

let _;
_ = one(); // is now 1.
_ = double(_); // is now 2.

Promise.resolve().then(() => {
  // This does *not* print 2!
  // It prints 1, because '_' is reassigned downstream.
  console.log(_);
});

// _ becomes 1 before the promise callback.
_ = one(_);

Okay, so why don’t we just avoid this infestation of temporary underscores, and nest them into one gigantic one-liner?

JavaScript
await sharp(
  jimp
    .read(
      await sharp(
        await jimp
          .read(
            await sharp('coding-beauty-v1.jpg').toBuffer()
          )
          .grayscale()
          .getBufferAsync('image/png')
      )
        .resize(250, 250)
        .toBuffer()
    )
    .sepia()
    .getBufferAsync('image/png')
).toFile('coding-beauty-v2.png');

It’s a mess. The underscore is gone, but who in the world can understand this at a glance? How easy is it to tell how the data flows throughout this code, and make any necessary adjustments?

Understanding, at a glance — this is what we should strive for with every line of code we write.

The pipeline operator greatly outshines every other method, giving us both freedom from temporary variables and readability. It was designed for this.

JavaScript
// We don't need to wrap 'await' anymore
await sharp('coding-beauty-v1.jpg').toBuffer()
  |> jimp.read(%)
  |> %.grayscale()
  |> await %.getBufferAsync('image/png')
  |> await sharp(%).resize(250, 250).toBuffer()
  |> await jimp.read(%)
  |> %.sepia()
  |> await %.getBufferAsync('image/png')
  |> await sharp(%).toFile('coding-beauty-v2.png');

Here the % only exists within this particular pipeline.

Method chaining?

Who hasn’t used and combined heavily popular array methods like map, filter, and sort? Very hard to avoid in applications involving any form of list manipulation.

JavaScript
const numbers = '4,2,1,3,5';
const result = numbers
  .split(',')
  .map(Number)
  .filter((num) => num % 2 === 0)
  .map((num) => num * 2)
  .sort(); // [4, 8]

This is actually great. There aren’t any temporary variables or unreadable nesting here either and we can easily follow the chain from start to finish.

The formatting lets us easily add more methods at any point in the chain; a feature-packed editor like VS Code can easily swap the processing order of two methods with the move-line-up/down shortcuts (Alt + Up and Alt + Down by default).

There’s a reason why libraries like Node’s core http module and jQuery are designed like this:

JavaScript
const http = require('http');

http
  .createServer((req, res) => {
    console.log('Welcome to Coding Beauty');
  })
  .on('error', () => {
    console.log('Oh no!');
  })
  .on('close', () => {
    console.log('Uuhhm... bye!');
  })
  .listen(3000, () => {
    console.log('Find me on port 3000');
  });

The problem with method chaining is that we can’t use it everywhere. If the class wasn’t designed like that we’re stuck and out in the cold.

It doesn’t work very well with generator methods, async/await and function/method calls outside the object, like we saw here:

JavaScript
await sharp(
  // 3-method chain, but not good enough!
  jimp
    .read(
      await sharp(
        // Same here
        await jimp
          .read(
            await sharp('coding-beauty-v1.jpg').toBuffer()
          )
          .grayscale()
          .getBufferAsync('image/png')
      )
        .resize(250, 250)
        .toBuffer()
    )
    .sepia()
    .getBufferAsync('image/png')
).toFile('coding-beauty-v2.png');

But all this and more works with the pipeline operator — even object literals and the async import() function.

JavaScript
await sharp('coding-beauty-v1.jpg').toBuffer()
  |> jimp.read(%)
  |> %.grayscale()
  |> await %.getBufferAsync('image/png')
  |> await sharp(%).resize(250, 250).toBuffer()
  |> await jimp.read(%)
  |> %.sepia()
  |> await %.getBufferAsync('image/png')
  |> await sharp(%).toFile('coding-beauty-v2.png');

Could have been F# pipes

We would have been using the pipeline operator very similarly to F# pipes, with the above turning out like this instead:

JavaScript
(await sharp('coding-beauty-v1.jpg').toBuffer())
  |> (x) => jimp.read(x)
  |> (x) => x.grayscale()
  |> (x) => x.getBufferAsync('image/png')
  |> await
  |> (x) => sharp(x).resize(250, 250).toBuffer()
  |> await
  |> (x) => jimp.read(x)
  |> await
  |> (x) => x.sepia()
  |> (x) => x.getBufferAsync('image/png')
  |> await
  |> (x) => sharp(x).toFile('coding-beauty-v2.png')
  |> await;

There was an alternative design, but you can already see how it makes for an inferior option: only single-argument functions are allowed, and the operation is more verbose, unless it’s already a single-argument function call.

Its weird handling of async/await was also a key reason why it got rejected — along with memory usage concerns. So, forget about F# pipes in JS!

Use the pipeline operator right now

Yes you can — with Babel.

Babel has a nice habit of implementing features before they’re officially integrated in the language; it did this for top-level await, optional chaining, and many others. The pipeline operator couldn’t be an exception.

Just use the @babel/plugin-proposal-pipeline-operator plugin and you’re good to go.
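
Here’s a minimal sketch of the Babel config, assuming the hack-style proposal with % as the topic token (the form used in the examples above):

JavaScript
// babel.config.js (minimal sketch): the plugin requires picking
// a proposal flavor and a topic token
module.exports = {
  plugins: [
    [
      '@babel/plugin-proposal-pipeline-operator',
      { proposal: 'hack', topicToken: '%' },
    ],
  ],
};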

It’s optional of course — but not for long.

Prettier the code formatter is already prepared.

Even though we can’t say the same about VS Code or Node.js.

Right now there’s even speculation that % won’t be the final symbol used to pass values around in the pipeline; let’s watch and see how it all plays out.

Final thoughts

It’s always great to see new and exciting features come to the language. With the JavaScript pipeline operator, you’ll cleanse your code of temporary variables and cryptic nesting, and greatly boost code readability, efficiency, and quality.

The new M4 Mac mini is a MONSTER

They call it mini but what it can do is far from mini.

Only 5 x 5 x 2 inches and 1.5 pounds. That’s mega-light.

Yet the M4 chip makes it as dangerous as the new MacBook Pro — even though it costs much less.

Image source: verge.com

And just look at the ports:

Image source: apple.com

And you know I saw this pic on their website and was like, What the hell is this?

Then I saw this:

Image source: apple.com

Ohhh… it’s a CPU — no, a system unit…

It’s a “pure” computer with zero peripherals — not even a battery. You’re buying everything yourself.

Definitely dramatically superior to the gigantic system unit I used when I was younger.

But I didn’t think this was still a huge thing. Especially with integrated screens like the iMac.

Mac Mini is like the complete opposite of the iMac — a gigantic beast that comes with everything…

Image source: apple.com

iMac gives you predictability — no analysis paralysis in getting all your parts (although you can just buy Apple anyway)

Image source: apple.com

Mac Mini is jam-packed with ports:

On the front we’ve got two 10 Gbps USB-C ports and a headphone jack:

Image source: apple.com

Back ports:

Lovely crisp icons indicate what they’re each for…

Image source: apple.com

But they put the power button at the bottom — dumb move!

You’ll have to lift it up any time you want to turn it on.

Wouldn’t it have been cool if instead they made the power button huge to cover the bottom completely — so you’d just have to push it down like those red buttons in game shows?

But once it’s all powered up the possibilities are endless:

From basic typing to heavyweight gaming — like Apple Arcade stuff:

Image source: apple.com

And coding of course:

Image source: apple.com

And with an improved thermal system, Mac Mini can handle all these demanding tasks quietly:

Image source: apple.com

The base plan starts at $599 for 16 GB RAM and 256 GB SSD with the M4 chip, and you can pay for higher configs like other Mac devices allow:

  • 16 GB RAM and 512 GB SSD – $799
  • 24 GB RAM and 512 GB SSD – $999

And then there’s the M4 Pro — 24 GB RAM and 512 GB SSD for $1399.

Overall the M4 Mac Mini is a perfect blend of power, compact design, and value, great for professionals looking for the ideal desktop workstation.

This amazing service saves you from wasting money on a new PC

Shadow PC saves you from wasting thousands of dollars on a new PC.

A fully customizable computer in the cloud with amazing capabilities.

Built to handle heavyweight work: from hardcore gaming to video editing to game dev.

A Windows you can take anywhere you go. Install whatever you want — if it runs on Windows, it runs on Shadow.

❌ Before:

You spend hours searching for the perfect PC to buy with specs that meet your needs and stay within your budget.

You empty your wallet and waste more time ordering it online or checking out your nearby stores.

Then you waste more money on data to download everything you need to finally get started.

βœ… Now:

Join Shadow and get a cloud PC instantly.

Install everything with lightning-fast Internet speeds of over 1 Gbps:

Done.

And this Internet has nothing to do with your data plan — you only need data to stream the screen to your device; all the uploads and downloads happen on the remote PC at zero cost to you.

Lightweight and straightforward — open the Shadow app and you get to the desktop in less than a minute.

Turn it off and come back whenever to pick right where you left off.

Play hardcore CPU-intensive games without making a dent in your system resources or storage space. Your PC fan will be super silent and your CPU will be positively bored out of its mind with idleness.

Make it full-screen and enjoy a seamless, immersive experience.

When I was using it on a Windows PC there were times when I didn’t even know which was which. Cause it’s literally just Windows — no curated interface like in some gaming services.

It’s also got apps for Mac, Android, iOS, and Linux.

Including a convenient browser-based mode for quick and easy access:

Cost?

So there are two pricing tiers — Shadow Gaming for gaming and Shadow Pro for professional work like video editing.

For just $10 a month you get a powerful 3.1 GHz processor with 6 GB of RAM, a generous 5 TB of HDD storage, AND 256 GB of SSD storage!

Easily capable of running Fortnite, Minecraft, and many other popular games.

You also get 1 Gb/s download bandwidth, guaranteed.

Upgrading to the Boost plan will get you an additional 6 GB RAM and a 256 GB SSD for $30 a month.

And then there’s the most powerful Power plan for even more… POWER.

Shadow Pro’s pricing is a bit different.

The plan names are typical and boring, but the starting plan is cheaper. I went with Standard and it was great.

This is amazing! How do I get started?

Just head over to shadow.tech and create an account:

After subscribing to a plan they’ll start setting up your cloud PC right away. Looks like they do it manually, so it’ll take anywhere from 30 to 60 minutes to complete.

The email they sent me:

Install the app and sign in.

START NOW and start enjoying your personal computer in the cloud.

Final thoughts

New Gemini 1.5 FLASH model: An absolute Google game changer

So Google has finally decided to show OpenAI who the real king of AI is.

Their new Gemini 1.5 Flash model blows GPT-4o out of the water and the capabilities are hard to believe.

Lightning fast.

33 times cheaper than GPT-4o but has a 700% greater context — 1 million tokens.

What is 1 million tokens in the real-world? Approximately:

  • Over 1 hour of video
  • Over 30,000 lines of code
  • Over 700,000 words

❌ GPT-4o cost:

  • Input: $2.50 per million tokens
  • Output: $10 per million tokens
  • Cached input: $1.25 per million tokens

✅ Gemini 1.5 Flash cost:

  • Input: $0.075 per million tokens
  • Output: $0.30 per million tokens
  • Cached input: $0.01875 per million tokens

And then there’s the mini Flash-8B version for cost-efficient tasks — 66 times cheaper:

And the best part is the multi-modality — it can reason with text, files, images and audio in complex integrated ways.

And 1.5 Flash has almost all the capabilities of Pro but is much faster. As a dev you can start using them now.

Gemini 1.5 Pro was tested with a 44-minute silent movie and astonishingly, it easily analyzed the movie into various plot points and events. Even pointing out tiny details that most of us would miss on first watch.

Meanwhile the GPT-4o API only lets you work with text and images.

You can easily create, test and refine prompts in Google’s AI Studio — completely free.

It doesn’t count toward your billing like the OpenAI playground does.

Just look at the power of Google AI Studio — creating a food recipe based on an image:

I uploaded this delicious bread from gettyimages:

Now:

What if I want the response to be a specialized format for my API or something?

Then you can just turn on JSON mode and specify the response schema:
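
Here’s a minimal sketch of what that looks like in code with the Gemini Node.js SDK (assuming the @google/generative-ai package and a GEMINI_API_KEY environment variable; the recipe schema is just an example):

JavaScript
import { GoogleGenerativeAI, SchemaType } from '@google/generative-ai';

const genAI = new GoogleGenerativeAI(process.env.GEMINI_API_KEY);

const model = genAI.getGenerativeModel({
  model: 'gemini-1.5-flash',
  generationConfig: {
    responseMimeType: 'application/json', // JSON mode
    responseSchema: {
      // example schema for a recipe response
      type: SchemaType.OBJECT,
      properties: {
        recipeName: { type: SchemaType.STRING },
        ingredients: {
          type: SchemaType.ARRAY,
          items: { type: SchemaType.STRING },
        },
      },
    },
  },
});

const result = await model.generateContent(
  'Give me a recipe for this bread.'
);
console.log(result.response.text()); // structured JSON string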

OpenAI playground has this too, but it’s not as intuitive to work with.

Another upgrade Gemini has over OpenAI is how creative it can be.

In Gemini you can increase the temperature from 0 to 200% to control how random and creative the responses are:

Meanwhile in OpenAI if you try going far beyond 100%, you’ll most likely get a whole literal load of nonsense.

And here’s the best part — when you’re done creating your prompt you can just use Get code — easily copy and paste the boilerplate API code and move lightning-fast in your development.

Works in several languages including Kotlin, Swift and Dart — efficient AI workflow in mobile dev.

In the OpenAI playground you can get the code for Python and JavaScript.

Final thoughts

Gemini 1.5 Flash is a game-changer offering unparalleled capabilities at a fraction of the cost.

With its advanced multi-modality, ease of use, generous free pricing, and creative potential, it sets a new standard for AI, leaving GPT-4o in the dust.

Svelte 5 is React on steroids 😲

Svelte 5 just got released and it’s packed with amazing new upgrades.

It’s like React now but leaner, easier, and yes — faster.

Just look at this:

❌ Before Svelte 5:

Creating state:

You could just create state variables with let:

HTML
<script>
  let count = 0;
  let site = 'codingbeautydev.com';
</script>

✅ Now with Svelte 5…

useState 😱

HTML
<script>
  let count = $state(0);
  let site = $state('codingbeautydev.com');
</script>

vs React:

JavaScript
export function Component() {
  const [count, setCount] = useState(0);
  const [site, setSite] = useState('codingbeautydev.com');
  // ...
}

But see how the Svelte version is still much less verbose — and no need for a component name.

Watching for changes in state?

In React this is useEffect:

JavaScript
export function Component() {
  const [count, setState] = useState(0);

  useEffect(() => {
    if (count > 9) {
      alert('Double digits?!');
    }
  }, [count]);

  const double = count * 2;
  // ...
}

❌ Before Svelte 5:

You had to use a cryptic unnatural $: syntax to watch for changes — and to create derived state too.

HTML
<script>
  let count = 0;

  $: if (count > 9) {
    alert('Double digits?!');
  }

  $: double = count * 2;
</script>

✅ Now:

We have useEffect in Svelte with $effect 👇

And a more straightforward way to create auto-updated derived state:

HTML
<script>
  let count = $state(0);

  $effect(() => {
    if (count > 9) {
      alert('Double digits?!');
    }
  });

  const double = $derived(count * 2);
</script>

Svelte intelligently figures out dependencies to watch for, unlike React.

And what about handling events and updating the state?

In React:

JavaScript
export function Component() {
  // 👇 `setState` function from `useState`
  const [count, setCount] = useState(0);

  return (
    // event handlers are good old JS functions
    <button onClick={() => setCount((prev) => prev + 1)}>
      Increase
    </button>
  );
}

❌ Before:

Svelte used to treat events specially and differently from props.

HTML
<script>
  let count = 0;
</script>

Count: {count}
<br />

<!-- 👇 special on: directive for events -->
<button on:click={() => count++}>
  Increase
</button>

✅ Now:

Svelte is now following React’s style of treating events just like properties.

HTML
<script>
  let count = $state(0);
</script>

Count: {count}
<br />

<!-- 👇 onclick is just a regular JS function now -->
<button onclick={() => count++}>
  Increase
</button>

Custom component props

In React:

Props are a regular JS object the component receives:

JavaScript
// 👇 `props` is an object
export function Person(props) {
  const { firstName, lastName } = props;

  return (
    <h1>
      {firstName} {lastName}
    </h1>
  );
}

❌ Before Svelte 5:

You had to use a weird export let approach to expose component properties:

HTML
<script>
  export let firstName;
  export let lastName;
</script>

<h1>
  {firstName} {lastName}
</h1>

Now in Svelte 5:

There’s a $props function that returns an object like in React!

HTML
<script>
  const { firstName, lastName } = $props();
</script>

<h1>
  {firstName} {lastName}
</h1>

Custom component events

In React:

Events are just props so they just need to accept a callback and call it whenever:

JavaScript
import React, { useState } from 'react';

const Counter = ({ onIncrease }) => {
  const [increase, setIncrease] = useState(0);

  const handleIncrease = () => {
    onIncrease(increase);
  };

  return (
    <div>
      Increase: {increase}
      <button onClick={handleIncrease}>Increase</button>
    </div>
  );
};

export default Counter;

❌ Before Svelte 5:

You’d have to use this complex createEventDispatcher approach:

HTML
<script>
  import { createEventDispatcher } from 'svelte';

  let increase = 0;

  const dispatchEvent = createEventDispatcher();
</script>

<button
  on:click={() => {
    dispatchEvent('increase', increase);
  }}
>
  Increase
</button>

✅ Now:

Events are now props like in React:

HTML
<script>
  let increase = $state(0);

  const { onIncrease } = $props();
</script>

<button onclick={() => onIncrease(increase)}>
  Increase
</button>

Components: classes -> functions

Yes, Svelte has gone the way of React here too.

Remember when we still had to extend Component and use render() to create a class component?

JavaScript
import React, { Component } from 'react';

class Counter extends Component {
  constructor(props) {
    super(props);
    this.state = { count: 0 };
  }

  incrementCount = () => {
    this.setState((prevState) => ({
      count: prevState.count + 1,
    }));
  };

  render() {
    return (
      <div>
        <h1>Counter: {this.state.count}</h1>
        <button onClick={this.incrementCount}>Increment</button>
      </div>
    );
  }
}

export default Counter;

And then hooks came along to let us have much simpler function components?

Svelte has now done something similar, by making components functions instead of classes by default.

In practice this won’t change much of how you write Svelte code — we never created the classes directly anyway — but it does tweak the app mounting code a little:

JavaScript
import { mount } from 'svelte';
import App from './App.svelte';

// ❌ Before
const app = new App({
  target: document.getElementById('app'),
});

// ✅ After
const app = mount(App, {
  target: document.getElementById('app'),
});

export default app;

Final thoughts

It’s great to see Svelte improve with inspiration from other frameworks.

Gaining the intuitiveness of the React-style design while staying lean and fast.

Web dev keeps moving.

Next.js 15 is an absolute game changer 😲

Next.js 15 is officially here and things are better than ever!

From a brand new compiler to 700x faster build times, it’s never been easier to create full-stack web apps with exceptional performance.

Let’s explore the latest features from v15:

1. create-next-app upgrades: cleaner UI, 700x faster build

Reformed design

❌ From this:

✅ To this:

Webpack → Turbopack

Turbopack: The fastest module bundler in the world (or so they say):

  • 700x faster than Webpack
  • 10x faster than Vite

And now with v15, adding it to your Next.js project is easier than ever before:
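
In the v15 RC it’s just a flag on the dev command (renamed to --turbopack in the stable release), so there’s no bundler config to write:

JavaScript
npx next dev --turbo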

2. TypeScript configs — next.config.ts

With Next.js 15 you can finally create config files in TypeScript directly:

JavaScript
import type { NextConfig } from 'next';

const nextConfig: NextConfig = {
  /* config options here */
};

export default nextConfig;

The NextConfig type enables editor intellisense for every possible option.

3. React Compiler, React 19 support, and user-friendly errors

React Compiler is a React compiler (who would’ve thought).

A modern compiler that understands your React code at a deep level.

Bringing optimizations like automatic memoization — destroying the need for useMemo and useCallback in the vast majority of cases.

Saving time, preventing errors, speeding things up.
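
For example, in a hand-memoized component like this hypothetical one, the useMemo call becomes unnecessary once the compiler is enabled; it memoizes the computation automatically:

JavaScript
import { useMemo } from 'react';

// ❌ Manual memoization you write today
function Cart({ items }) {
  const total = useMemo(
    () => items.reduce((sum, item) => sum + item.price, 0),
    [items]
  );
  return <p>Total: {total}</p>;
}

// ✅ With React Compiler enabled, the plain version is enough;
// the compiler memoizes `total` for you
function CartWithCompiler({ items }) {
  const total = items.reduce((sum, item) => sum + item.price, 0);
  return <p>Total: {total}</p>;
}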

And it’s really easy to set up: You just install babel-plugin-react-compiler:

JavaScript
npm install babel-plugin-react-compiler

And add this to next.config.js

JavaScript
const nextConfig = {
  experimental: {
    reactCompiler: true,
  },
};

module.exports = nextConfig;

React 19 support

Bringing upgrades like client and server Actions.
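
A quick sketch of what a Server Action looks like when passed straight to a form (addTodo is a hypothetical DB helper):

JavaScript
// app/todos/page.jsx
import { addTodo } from '@/app/lib/db'; // hypothetical helper

export default function Page() {
  async function createTodo(formData) {
    'use server';
    await addTodo(formData.get('title'));
  }

  return (
    <form action={createTodo}>
      <input name="title" placeholder="New todo" />
      <button type="submit">Add</button>
    </form>
  );
}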

Better hydration errors

Dev quality of life means a lot and error message usefulness plays a big part in that.

Next.js 15 sets the bar higher: now making intelligent suggestions on possible ways to fix the error.

Before v15:

Now:

You know I’ve had a tough time in the past from these hydration errors, so this will certainly be an invaluable one for me.

4. New caching behavior

No more automatic caching!

For all:

  • fetch() requests
  • Route handlers: GET, POST, etc.
  • <Link> client-side navigation.

But if you still want to cache fetch():

JavaScript
// `cache` defaults to `no-store` in v15 (it was cached by default before)
fetch('https://example.com', {
  cache: 'force-cache',
});

Then you can cache the others with some next.config.js options.
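
For instance, a sketch of re-enabling the client router cache for <Link> navigations with the experimental staleTimes option (experimental, so the names may change); GET route handlers can separately opt back in with export const dynamic = 'force-static' in their route files:

JavaScript
// next.config.js
const nextConfig = {
  experimental: {
    staleTimes: {
      dynamic: 30, // seconds to reuse dynamically rendered pages
      static: 180, // seconds to reuse statically rendered pages
    },
  },
};

module.exports = nextConfig;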

5. Partial Prerendering (PPR)

PPR combines static and dynamic rendering in the same page.

Drastically improving performance by loading static HTML instantly and streaming the dynamic parts in the same HTTP request.

JavaScript
import { Suspense } from 'react';
import { StaticComponent, DynamicComponent } from '@/app/ui';

export const experimental_ppr = true;

export default function Page() {
  return (
    <>
      <StaticComponent />
      <Suspense fallback={...}>
        <DynamicComponent />
      </Suspense>
    </>
  );
}

All you need is this in next.config.js:

JavaScript
const nextConfig = {
  experimental: {
    ppr: 'incremental',
  },
};

module.exports = nextConfig;

6. after

Next.js 15 gives you a clean way to separate essential from non-essential tasks in every server request:

  • Essential: Auth checks, DB updates, etc.
  • Non-essential: Logging, analytics, etc.
JavaScript
import { unstable_after as after } from 'next/server';
import { log } from '@/app/utils';

export default function Layout({ children }) {
  // Secondary task
  after(() => {
    log();
  });

  // Primary tasks
  // fetch() from DB

  return <>{children}</>;
}

Start using it now with experimental.after:

JavaScript
const nextConfig = {
  experimental: {
    after: true,
  },
};

module.exports = nextConfig;

These are just 6 of all the impactful new features from Next.js 15.

Get it now with npx create-next-app@rc and start enjoying radically improved build times and superior developer quality of life.

This new React library will make you dump Redux forever

The new Zustand library changes everything for state management in web dev.

The simplicity completely blows Redux away. It’s like Assembly vs Python.

Forget action types, dispatch, Providers and all that verbose garbage.

Just use a hook! 👇

JavaScript
import { create } from 'zustand';

const useStore = create((set) => ({
  count: 0,
  increment: () =>
    set((state) => ({ count: state.count + 1 })),
}));

Effortless and intuitive with all the benefits of Redux and Flux — immutability, data-UI decoupling…

With none of the boilerplate — it’s just an object.

Redux had to be patched with hook support but Zustand was built from the ground up with hooks in mind.

JavaScript
function App() {
  const store = useStore();

  return (
    <div>
      <div>Count: {store.count}</div>
      <button onClick={store.increment}>Increment</button>
    </div>
  );
}

export default App;

Share the store across multiple components and select only what you want:

JavaScript
function Counter() {
  // ✅ Only `count`
  const count = useStore((state) => state.count);
  return <div>Count: {count}</div>;
}

function Controls() {
  // ✅ Only `increment`
  const increment = useStore((state) => state.increment);
  return <button onClick={increment}>Increment</button>;
}

Create multiple stores to decentralize data and scale intuitively.

Let’s be real, that single-state stuff doesn’t always make sense. And it defies encapsulation.

It’s often more natural to let a branch of components have their localized global state.

JavaScript
// ✅ More global store to handle the count data
const useStore = create((set) => ({
  count: 0,
  increment: (by) =>
    set((state) => ({ count: state.count + by })),
}));

// ✅ More local store to handle user input logic
const useControlStore = create((set) => ({
  input: '',
  setInput: (input) => set({ input }),
}));

function Controls() {
  return (
    <div>
      <CountInput />
      <Button />
    </div>
  );
}

function Button() {
  const increment = useStore((state) => state.increment);
  const input = useControlStore((state) => state.input);
  return (
    <button onClick={() => increment(Number(input))}>
      Increment by {input}
    </button>
  );
}

function CountInput() {
  const input = useControlStore((state) => state.input);
  const setInput = useControlStore((state) => state.setInput);
  return (
    <input
      value={input}
      onChange={(e) => setInput(e.target.value)}
    />
  );
}

Meet useShallow(), a powerful way to get derived states — it instantly updates when any of the original states change.

JavaScript
import { create } from 'zustand';
import { useShallow } from 'zustand/react/shallow';

const useLibraryStore = create((set) => ({
  fiction: 0,
  nonFiction: 0,
  borrowedBooks: {},
  // ...
}));

// ✅ Object pick
const { fiction, nonFiction } = useLibraryStore(
  useShallow((state) => ({
    fiction: state.fiction,
    nonFiction: state.nonFiction,
  }))
);

// ✅ Array pick
const [fiction, nonFiction] = useLibraryStore(
  useShallow((state) => [state.fiction, state.nonFiction])
);

// ✅ Mapped picks
const borrowedBooks = useLibraryStore(
  useShallow((state) => Object.keys(state.borrowedBooks))
);

And what if you don’t want instant updates — only at certain times?

It’s easier than ever — just pass a second argument to your store hook.

JavaScript
const user = useUserStore(
  (state) => state.user,
  (oldUser, newUser) => compare(oldUser.id, newUser.id)
);

And how about derived updates based on previous states, like in React’s useState?

Don’t worry! In Zustand states update partially by default:

JavaScript
const useStore = create((set) => ({
  user: {
    username: 'tariibaba',
    site: 'codingbeautydev.com',
    color: 'blue💙',
  },
  premium: false,
  // `user` object is not affected
  // `state` is the curr state before the update
  unsubscribe: () => set((state) => ({ premium: false })),
}));

It only works at the first level though — you have to handle deeper partial updates by yourself:

JavaScript
const useStore = create((set) => ({
  user: {
    username: 'tariibaba',
    site: 'codingbeautydev.com',
    color: 'blue💙',
  },
  premium: false,
  updateUsername: (username) =>
    // 👇 deep updates necessary to retain other object properties
    set((state) => ({ user: { ...state.user, username } })),
}));

If you don’t want partial updates, just pass the new state object directly and true as the two arguments.

JavaScript
const useStore = create((set) => ({
  user: {
    username: 'tariibaba',
    site: 'codingbeautydev.com',
    color: 'blue💙',
  },
  premium: false,
  // Clear data with `true`
  resetAccount: () => set({}, true),
}));

Zustand even has built-in support for async actions — no need for Redux Thunk or any external library.

JavaScript
const useStore = create((set) => ({
  user: {
    username: 'tariibaba',
    site: 'codingbeautydev.com',
    color: 'blue💙',
  },
  premium: false,
  // ✅ async actions
  updateFavColor: async (color) => {
    await fetch('https://api.tariibaba.com', {
      method: 'PUT',
      body: color,
    });
    set((state) => ({ user: { ...state.user, color } }));
  },
}));

It’s also easy to get state within actions, thanks to get — the 2nd param in create()’s callback:

JavaScript
// ✅ `get` lets us use state directly in actions
const useStore = create((set, get) => ({
  user: {
    username: 'tariibaba',
    site: 'codingbeautydev.com',
    color: 'blue💙',
  },
  messages: [],
  sendMessage: ({ message, to }) => {
    const newMessage = {
      message,
      to,
      // ✅ `get` gives us the `user` object
      from: get().user.username,
    };
    set((state) => ({
      messages: [...state.messages, newMessage],
    }));
  },
}));

It’s all about hooks in Zustand, but if you want you can read and subscribe to values in state directly.

JavaScript
// Get a non-observed state with getState()
const count = useStore.getState().count;

useStore.subscribe((state) => {
  console.log(`new value: ${state.count}`);
});

This makes it great for cases where the property changes a lot but you only need the latest value for intermediate logic, not direct UI:

JavaScript
import { useEffect, useRef } from 'react';

export default function App() {
  const widthRef = useRef(useStore.getState().windowWidth);

  useEffect(() => {
    useStore.subscribe((state) => {
      widthRef.current = state.windowWidth;
    });
  }, []);

  useEffect(() => {
    setInterval(() => {
      console.log(`Width is now: ${widthRef.current}`);
    }, 1000);
  }, []);

  // ...
}

Zustand outshines Redux and MobX and all the others in almost every way. Use it for your next project and you won’t regret it.