Blog

A hacker just scammed an AI bot to win $47,000 😲

What if you could trick an AI bot designed to guard money into handing over $47,000?

That’s exactly what happened recently. A hacker known as p0pular.eth beat the odds and convinced Freysa — an AI bot — to transfer 13.19 ETH (worth ~$47,000). And it only took 482 attempts.

Here’s the most worrying thing for me: they didn’t use any technical hacking skills. Just clever prompts and persistence.

The Freysa experiment

Freysa wasn’t your average AI bot. It was part of a challenge—a game, really. The bot had one job: to protect its Ethereum wallet at all costs.

Anyone could try to convince Freysa to release the funds using only text-based commands. Each attempt came with a fee starting at $10 and increasing to $4,500 for later attempts. The more people tried, the bigger the prize pool grew—eventually hitting the $47,000 mark.

How the hacker did it

Most participants failed to outsmart Freysa. But “p0pular.eth” had other plans.

Here’s the play-by-play of how they pulled it off:

  1. Pretended to have admin access. The hacker convinced Freysa they were authorized to bypass its defenses. Classic social engineering.
  2. Tweaked the bot’s payment logic. They manipulated Freysa’s internal rules, making the bot think releasing funds aligned with its programming.
  3. Announced a fake $100 deposit. This “deposit” tricked Freysa into approving a massive transfer, releasing the entire prize pool.

Smart, right? And it shows just how easily AI logic can be twisted.

Why this matters

This experiment wasn’t just a fun game—it was a wake-up call.

Freysa wasn’t some rogue AI running wild. It was specifically designed to resist manipulation. If it failed this badly, what about other AI systems?

Think about the AI managing your bank accounts or processing loans or even running government operations. What happens when someone with enough patience and cleverness decides to game the system?

Lessons learned

  1. AI can be tricked. Smart prompts and persistence were all it took to outmaneuver Freysa.
  2. Stronger safeguards are a must. AI systems need better defenses, from multi-layered security to smarter logic checks.
  3. Social engineering isn’t going away. Humans are still the weakest link—and AI is no exception when humans create the rules.

This hack might seem like a one-off. But as AI gets more powerful and takes on bigger roles, incidents like this could become more common.

So what do we do? Start building smarter, more resilient systems now. The stakes are too high not to.

AI is FINALLY making 100% bug-free code a reality 😲

5 years ago I would have laughed at you if you told me you can write code guaranteed to have zero bugs.

But this is where we’re rapidly heading right now. In fact we’re practically there…

AI tools like GitHub Copilot, CodeRabbit, and Tabnine are reshaping every stage of software dev and drastically reducing the chances of bugs.

And that’s why adopting the AI-first mindset is becoming very important — not just in coding but solving life problems in general.

Writing code with zero bugs — mindset

Generate huge swaths of functions, classes, and entire files with AI.

Beginner tier: Use our good old ChatGPT (already good old in 2024 😅)

Mid tier: Use a built-in code editor chatbot like GitHub Copilot Chat:

Elite tier: Use inline code creation AI

The best part about this inline tool — refine the code in-place until you get exactly what you want:

Refactoring code with zero bugs — AI mindset

❌ Before: Manual mindset

Refactoring is painful and daunting.

You’re afraid of breaking things — especially if you didn’t write comprehensive tests.

✅ Now: AI mindset

Refactoring is much easier and far less stressful.

No more breaking things — break huge functions into smaller pieces instead:
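For example, here’s the kind of split you could ask the AI for — a hypothetical sketch (the order-handling functions are made up purely for illustration) where one bloated function becomes small, testable pieces:

JavaScript
// Hypothetical example — each piece is small enough to verify at a glance
function validateOrder(order) {
  if (!order.items?.length) throw new Error('Empty order');
}

function calculateTotal(order) {
  return order.items.reduce((sum, item) => sum + item.price, 0);
}

function sendConfirmation(order) {
  console.log(`Confirmation sent to ${order.email}`);
}

// The once-huge processOrder() shrinks to a readable outline
function processOrder(order) {
  validateOrder(order);
  const total = calculateTotal(order);
  sendConfirmation(order);
  return total;
}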

Something that took several minutes before is now taking fractions of a second.

This ensures that the code remains efficient, readable, and free from hidden errors. Regular refactoring with AI support leads to cleaner, more maintainable codebases.

Review and publish code with zero bugs — AI mindset

After you write and refactor you still need code review to catch issues your human brain missed. And then publish.

With an AI-focused approach you can have your code automatically scanned for potential bugs and security vulnerabilities, and checked for how well it follows style guides and best practices.

And tools like CodeRabbit make this really easy — it analyzes your entire codebase and makes intelligent suggestions to make your code cleaner and faster — saving hours of review time.

Not just faster code but shorter and more compact code:

And what about publishing your code changes?

In VS Code built-in tools like Copilot suggest individual commit messages based on the changes:

And when it’s time to merge a pull request, CodeRabbit automatically generates a message for you — saving time and effort:

Final thoughts

AI in software dev is no longer optional — it’s essential if you wanna get ahead of the competition.

With tools like ChatGPT, Copilot and CodeRabbit, you can write, refactor, review, and publish code faster than ever before with as few errors as possible — while enjoying a high developer quality of life in the process.

The future of coding is here, and it’s powered by AI.

The new Dia AI browser will change everything

The Dia browser will change the world forever.

An incredible new AI-powered browser coming soon from the company behind Arc — The Browser Company (how creative).

The new AI Dia browser will be your personal copilot.

It will let you easily automate boring repetitive actions with simple commands.

Look, all we did was tell it to add items to our Amazon cart and it automatically opened Amazon, searched for the items and added them to the cart.

Zero input from you the human. You didn’t even have to specify what “these items” meant — it already knew what they were from the Gmail tab. It “saw” the tab like a human and did the rest intelligently.

All you need is to think about what it should do.

You can see how everyone is rushing to build agents now — OpenAI, Google, Anthropic, Apple…

From the video I saw, there are three major browser components they’re innovating on…

They are going to upgrade the writing cursor we’re so used to:

Just by clicking on the cursor there’ll be a list of automated actions depending on what you’re doing — and probably personalized to you.

Automating away the grunt work. All we said was “give me an idea” and it helped us breeze past our writer’s block.

I bet there’ll be a keyboard shortcut to make this even faster.

The browser Omnibox will also undergo a major upgrade of its own.

Instead of typing URLs and search queries, the Omnibox will be the starting point of boundless conversation with the AI-powered browser.

Instead of manually typing a URL to a doc, here we simply ask the browser to give us the doc directly — using a highly personalized description, saving us massive amounts of time.

And the most powerful upgrade of all — automating the browser cursor.

That’s how Dia will carry out complex chains of actions without you having to do anything.

When we added those items to our shopping cart earlier, it was the automated cursor in action.

You’ll be able to do a lot more than automated shopping too.

You could manage your bills and subscriptions, write and publish content across several social media platforms, plan holidays… the possibilities are endless.

Some Arc users aren’t too pleased with the news of Dia though…

But the Browser Company promises to keep Arc alive and kicking while they roll out Dia.

And since they’re actively hiring, you can bet they’re serious about making this new browser a game-changer.

But like someone said in the comments, will they be able to compete against Microsoft, Google, Apple, who have deep control of the OS?

They may have a chance on desktop where most non-power users live in their browser, but on mobile it’s kind of a dead end. They could never match the power of Gemini on Android and Apple Intelligence, which will have OS-level access to every app and system function.

Let’s see how users receive it when it launches.

But this move isn’t just about making life easier; it’s a peek into the future of web browsing. AI isn’t just a buzzword here—it’s the backbone of how Dia aims to redefine how we interact with the internet.

So, as we wait for Dia to hit the scene (early 2025, fingers crossed), one thing’s clear: The Browser Company is setting the bar for what a smart, helpful browser can be. Get ready—browsing might never feel the same again.

Google’s AI just changed chess forever

Google just released something unbelievable.

GenChess…

An innovative AI-powered chess platform that lets you easily create stunningly unconventional chess pieces with simple text prompts.

Just look at these beauties.

All you need is a simple keyword to generate a magnificent family of chess pieces based on the same theme.

Your imagination is the limit…

Classic chess set with a jam-on-toast theme?

No problem:

Look at what’s supposed to be the pawn 😅 — literally jam on toast — probably because that’s exactly what was in the prompt.

And wow these look delicious — I’m thinking food designed as chess pieces would be a fantastic business idea.

Look at the creative placements of the jam — and remember this is on-the-fly AI.

Staying in the food mood we can go with vanilla and chocolate ice cream 😋

Incredible — tell me you’re not salivating at this.

Again — it’s never the same:

And then GenChess gives us a similar opponent:

The possibilities are endless…

Sun vs Moon

Water vs Fire:

And when you’re finally satisfied you can choose the difficulty and time controls you want.

And then play:

And Google’s GenChess is dropping just in time for the 2024 World Chess Championship — and they’re actually the main sponsor.

Maybe they’re trying to gain some sort of moat in AI using Chess. So they’re shaking things up, making the game fresh, and giving players a whole new way to connect with it.

Google’s also rolling out a chess bot in their AI chatbot Gemini.

Want to play chess by just typing your moves? Now you can.

The board updates as you go so it feels more like a chat than a chess match.

They’re launching this in December, but it’s exclusive to Gemini Advanced subscribers.

GenChess is a big deal. It’s blending AI with chess in ways we’ve never seen before. You can turn a simple idea into fully customized chess pieces, and that’s just the start.

Google’s showing us how AI can reinvent even the oldest games. It’s wild and exciting. It’s going to change the game.

The 5 most amazing new JavaScript features in 2024

2024 has been an incredible year of brand new JS feature upgrades with ES15 and promising proposals.

From sophisticated async features to syntactic array sugar and modern regex, JavaScript coding has become easier and faster than ever.

1. Native array group-by is here

Object.groupBy():

JavaScript
const fruits = [
  { name: 'pineapple🍍', color: '🟡' },
  { name: 'apple🍎', color: '🔴' },
  { name: 'banana🍌', color: '🟡' },
  { name: 'strawberry🍓', color: '🔴' },
];

const groupedByColor = Object.groupBy(
  fruits,
  (fruit, index) => fruit.color
);

console.log(groupedByColor);

Literally the only thing keeping dinosaur Lodash alive — no more!

I was expecting a new instance method like Array.prototype.groupBy but they made it static for whatever reason.
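For comparison, here’s roughly what the same grouping looks like with the Lodash dependency versus the native method — a quick sketch reusing the fruits array from above and assuming the usual lodash import:

JavaScript
// Before: one more dependency just to group an array
const _ = require('lodash');
const byColorLodash = _.groupBy(fruits, 'color');

// Now: zero dependencies
const byColor = Object.groupBy(fruits, (fruit) => fruit.color);

// Both give roughly the same shape:
// { '🟡': [pineapple, banana], '🔴': [apple, strawberry] }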

Then we have Map.groupBy to group with object keys:

JavaScript
const array = [1, 2, 3, 4, 5];

const odd = { odd: true };
const even = { even: true };

Map.groupBy(array, (num, index) => {
  return num % 2 === 0 ? even : odd;
});

// => Map { {odd: true}: [1, 3, 5], {even: true}: [2, 4] }

Almost no one ever groups arrays this way though, so it will probably be far less popular.

2. Resolve promise from outside — modern way

With Promise.withResolvers().

It’s very common to externally resolve promises and before we had to do it with a Deferred class:

JavaScript
class Deferred {
  constructor() {
    this.promise = new Promise((resolve, reject) => {
      this.resolve = resolve;
      this.reject = reject;
    });
  }
}

const deferred = new Deferred();

deferred.resolve();

Or install from NPM — one more dependency.

But now with Promise.withResolvers():

JavaScript
const { promise, resolve, reject } = Promise.withResolvers();

See how I use it to rapidly promisify an event stream — awaiting an observable:

JavaScript
// data-fetcher.js
// ...
const { promise, resolve, reject } = Promise.withResolvers();

export function startListening() {
  eventStream.on('data', (data) => {
    resolve(data);
  });
}

export async function getData() {
  return await promise;
}

// client.js
import { startListening, getData } from './data-fetcher.js';

startListening();

// ✅ listen for single stream event
const data = await getData();

3. Buffer performance upgrades

Buffers are tiny data stores to store temporary data your app generates.

They make it incredibly easy to transfer and process data across various stages in a pipeline.

Pipelines like:

  • File processing: input file → buffer → process → new buffer → output file
  • Video streaming: network response → buffer → display video frame
  • Restaurant queues: receive customer → queue/buffer → serve customer
JavaScript
const fs = require('fs');
const { Transform } = require('stream');

const inputFile = 'input.txt';
const outputFile = 'output.txt';

const inputStream = fs.createReadStream(inputFile, 'utf-8');

const transformStream = new Transform({
  transform(chunk, encoding, callback) {
    // ✅ transform chunks from buffer
    callback(null, chunk);
  },
});

const outputStream = fs.createWriteStream(outputFile);

// ✅ start pipeline
inputStream.pipe(transformStream).pipe(outputStream);

With buffers, each stage processes data at its own speed, independent of the others.

But what happens when the data moving through the pipeline exceeds the buffer’s capacity?

Before, we’d have to copy all the data in the current buffer over to a bigger buffer.

Terrible for performance, especially when there’s gonna be a LOT of data in the pipeline.
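Here’s a rough sketch of that old workaround (the growBuffer helper is mine, just for illustration) — allocate a bigger buffer and copy every byte across:

JavaScript
// Hypothetical helper: the pre-ES2024 way to "grow" a buffer
function growBuffer(oldBuffer, newByteLength) {
  const bigger = new ArrayBuffer(newByteLength);
  // copy every existing byte into the new buffer
  new Uint8Array(bigger).set(new Uint8Array(oldBuffer));
  return bigger;
}

let buffer = new ArrayBuffer(1024);
buffer = growBuffer(buffer, 2048); // full copy on every resize 😬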

ES15 gives us a solution to this problem: Resizable array buffers.

JavaScript
const resizableBuffer = new ArrayBuffer(1024, {
  maxByteLength: 1024 ** 2,
});

// ✅ resize to 2048 bytes
resizableBuffer.resize(1024 * 2);

4. Asynchronous upgrades

Atomics.waitAsync(): Another powerful async coding feature in ES2024:

It’s when 2 agents share a buffer…

And agent 1 “sleeps” and waits for agent 2 to complete a task.

When agent 2 is done, it notifies using the shared buffer as a channel.

JavaScript
const sharedBuffer = new SharedArrayBuffer(4096);
const bufferLocation = new Int32Array(sharedBuffer);

// ✅ initial value at buffer location
bufferLocation[37] = 0x1330;

async function doStuff() {
  // ✅ agent 1: wait on shared buffer location until notify
  Atomics.waitAsync(bufferLocation, 37, 0x1330).then(
    (r) => {} /* handle arrival */
  );
}

function asyncTask() {
  // ✅ agent 2: notify on shared buffer location
  const bufferLocation = new Int32Array(sharedBuffer);
  Atomics.notify(bufferLocation, 37);
}

You’d be absolutely right if you thought this looks similar to normal async/await.

But the biggest difference: The 2 agents can exist in completely different code contexts — they only need access to the same buffer.

And: multiple agents can access or wait on the shared buffer at different times — and any one of them can notify to “wake up” all the others.

It’s like a P2P network. async/await is like client-server request-response.

JavaScript
const sharedBuffer = new SharedArrayBuffer(4096);
const bufferLocation = new Int32Array(sharedBuffer);

bufferLocation[37] = 0x1330;

// ✅ received shared buffer from postMessage()
const code = `
var ia = null;
onmessage = function (ev) {
  if (!ia) {
    postMessage("Aux worker is running");
    ia = new Int32Array(ev.data);
  }
  postMessage("Aux worker is sleeping for a little bit");
  setTimeout(function () {
    postMessage("Aux worker is waking");
    Atomics.notify(ia, 37);
  }, 1000);
};`;

async function doStuff() {
  // ✅ agent 1: exists in a Worker context
  const worker = new Worker(
    'data:application/javascript,' + encodeURIComponent(code)
  );
  worker.onmessage = (event) => {
    /* log event */
  };
  worker.postMessage(sharedBuffer);

  Atomics.waitAsync(bufferLocation, 37, 0x1330).then(
    (r) => {} /* handle arrival */
  );
}

function asyncTask() {
  // ✅ agent 2: notify on shared buffer location
  const bufferLocation = new Int32Array(sharedBuffer);
  Atomics.notify(bufferLocation, 37);
}

5. Regex v flag & set operations

A new feature to make regex patterns much more intuitive.

Finding and manipulating complex strings using expressive patterns — with the help of set operations:

JavaScript
// A and B are character classes, like [a-z]

// difference: matches A but not B
[A--B]

// intersection: matches both A & B
[A&&B]

// nested character class
[A--[0-9]]

To match ever-increasing sets of Unicode characters, like:

  • Emojis: 😀, ❤️, 👍, 🎉, etc.
  • Accented letters: é, à, ö, ñ, etc.
  • Symbols and non-Latin characters: ©, ®, €, £, µ, ¥, etc

So here we use Unicode regex and the v flag to match all Greek letters:

JavaScript
const regex = /[\p{Script_Extensions=Greek}&&\p{Letter}]/v;
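To see it in action, here’s a quick sketch testing that Greek pattern, plus the standard RGI_Emoji property of strings — one of the things that only works with the v flag:

JavaScript
const greek = /[\p{Script_Extensions=Greek}&&\p{Letter}]/v;
console.log(greek.test('π')); // true
console.log(greek.test('x')); // false

// Properties of strings like RGI_Emoji require the v flag
const emoji = /^\p{RGI_Emoji}$/v;
console.log(emoji.test('🎉')); // true
console.log(emoji.test('❤️')); // true — a multi-code-point emoji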

Final thoughts

Overall, 2024 was a significant leap for JavaScript with several features essential for modern development.

Empowering you to write cleaner code with greater conciseness, expressiveness, and clarity.

Incredible infinite scroll with JavaScript

Infinite scroll: Loading more and more content as the user scrolls down to the end.

No need for pagination + increases time spent on the site

With simple JavaScript we can recreate this easily:

We start with the basic HTML:

HTML
<div id="load-trigger-wrapper">
  <!-- Grid of images -->
  <div id="image-container"></div>

  <!-- Intersection Observer observes this -->
  <div id="load-trigger"></div>
</div>

<!-- Number of loading images -->
<div id="bottom-panel">
  Images: &nbsp;<b><span id="image-count"></span>&nbsp;</b>/
  &nbsp;<b><span id="image-total"></span></b>
</div>

Now it’s time to detect scrolling to the end with the Intersection Observer API:

JavaScript
const loadTrigger = document.getElementById('load-trigger');
// ...
const observer = detectScroll();
// ...

// Detect when #load-trigger becomes visible
function detectScroll() {
  const observer = new IntersectionObserver(
    // Callback also runs after observe()
    (entries) => {
      for (const entry of entries) {
        // ...
        loadMoreImages();
        // ...
      }
    },
    // Set "rootMargin" because of #bottom-panel height
    // 30px upwards from the bottom
    { rootMargin: '-30px' }
  );

  // Start watching #load-trigger div
  observer.observe(loadTrigger);

  return observer;
}

Now let’s show the initial skeleton images:

JavaScript
const imageClass = 'image';
const skeletonImageClass = 'skeleton-image';
// ...

// This function would make requests to an image server
function loadMoreImages() {
  const newImageElements = [];
  // ...
  for (let i = 0; i < amountToLoad; i++) {
    const image = document.createElement('div');

    // 👇 Display each image with skeleton-image class
    image.classList.add(imageClass, skeletonImageClass);

    // Include image in container
    imageContainer.appendChild(image);

    // Store in temp array to update with actual image when loaded
    newImageElements.push(image);
  }
  // ...
}
CSS
.image,
.skeleton-image {
  height: 50vh;
  border-radius: 5px;
  border: 1px solid #c0c0c0;
  /* Three per row, with space for margin */
  width: calc((100% / 3) - 24px);
  /* Initial color before loading animation */
  background-color: #eaeaea;
  /* Grid spacing */
  margin: 8px;
  /* Fit into grid */
  display: inline-block;
}

.skeleton-image {
  transition: all 200ms ease-in;
  /* Contain ::after element with absolute positioning */
  position: relative;
  /* Prevent overflow from ::after element */
  overflow: hidden;
}

.skeleton-image::after {
  content: "";
  /* Cover .skeleton-image div */
  position: absolute;
  top: 0;
  right: 0;
  bottom: 0;
  left: 0;
  /* Setup for slide-in animation */
  transform: translateX(-100%);
  /* Loader image */
  background-image: linear-gradient(
    90deg,
    rgba(255, 255, 255, 0) 0,
    rgba(255, 255, 255, 0.2) 20%,
    rgba(255, 255, 255, 0.5) 60%,
    rgba(255, 255, 255, 0)
  );
  /* Continue animation until image load */
  animation: load 1s infinite;
}

@keyframes load {
  /* Slide-in animation */
  100% {
    transform: translateX(100%);
  }
}

Update skeleton images

We get colors instead of images.

JavaScript
function loadMoreImages() {
  // ...
  // Create skeleton images and store them in "newImageElements" variable

  // Simulate delay from network request
  setTimeout(() => {
    // Colors instead of images
    const colors = getColors(amountToLoad);

    for (let i = 0; i < colors.length; i++) {
      const color = colors[i];

      // 👇 Remove skeleton loading indicator and show color
      newImageElements[i].classList.remove(skeletonImageClass);
      newImageElements[i].style.backgroundColor = color;
    }
  }, 2000);
  // ...
}

function getColors(count) {
  const result = [];
  let randUrl = undefined;

  while (result.length < count) {
    // Prevent duplicate images
    while (!randUrl || result.includes(randUrl)) {
      randUrl = getRandomColor();
    }
    result.push(randUrl);
  }

  return result;
}

function getRandomColor() {
  const h = Math.floor(Math.random() * 360);
  return `hsl(${h}deg, 90%, 85%)`;
}

Stop infinite scroll

This is a demo, so we’ll cap the total at an artificial number of images like 50.

JavaScript
const imageCountText = document.getElementById('image-count');
// ...
let imagesShown = 0;
// ...

function loadMoreImages() {
  // ...
  const amountToLoad = Math.min(loadLimit, imageLimit - imagesShown);

  // Load skeleton images...
  // Update skeleton images...

  // Update image count
  imagesShown += amountToLoad;
  imageCountText.innerText = imagesShown;

  if (imagesShown === imageLimit) {
    observer.unobserve(loadTrigger);
  }
}

Optimize performance with throttling

By using a throttle function to only allow a new batch of images to load within a certain time window.

JavaScript
let throttleTimer;

// Only one image batch can be loaded within a second
const throttleTime = 1000;
// ...

function throttle(callback, time) {
  // Prevent additional calls until timeout elapses
  if (throttleTimer) {
    console.log('throttling');
    return;
  }

  throttleTimer = true;

  setTimeout(() => {
    callback();

    // Allow additional calls after timeout elapses
    throttleTimer = false;
  }, time);
}

By calling throttle() in the Intersection Observer’s callback with a time of 1000, we ensure that loadMoreImages() is never called multiple times within a second.

JavaScript
function detectScroll() {
  const observer = new IntersectionObserver(
    (entries) => {
      // ...
      throttle(() => {
        loadMoreImages();
      }, throttleTime);
    },
    // ...
  );
  // ...
}

OpenAI’s new AI agent will change everything

The new OpenAI operator agent will change the world forever.

This is going to be a real AI agent that actually works — unlike gimmicks like AutoGPT.

Soon AI will be able to solve complex goals with lots of interconnected steps.

Completely autonomous — no continuous prompts — zero human guidance apart from dynamic input for each step.

Imagine you could just tell ChatGPT “teach me French” and that’ll be all it needs…

  • Analyzing your French level with a quick quiz
  • Crafting a comprehensive learning plan
  • Setting phone and email reminders to help you stick to your plan…
Not quite there yet 😉

This is basically the beginning of AGI — if it isn’t already.

And when you think of it this is already what apps like Duolingo try to do — solving complex problems.

But an AI agent will do this in far more comprehensive and personalized way — intelligently adjusting to the user’s needs and changing desires.

You can say something super vague like “plan my next holiday” and instantly your agent gets to work:

  • Analyzes your calendar to know the best next holiday time
  • Figures out somewhere you’ll love from previous conversations that stays within your budget
  • Books flights and sets reservations according to your schedule

This will change everything.

Which is why they’re not the only ones working on agents — the AI race continues…

We have Google apparently working on “Project Jarvis” — an AI agent to automate web-based tasks in Chrome.

Automatically jumping from page to page and filling out forms and clicking buttons.

Maybe something like Puppeteer — a dev tool programmers use to make the browser do stuff automatically — but it isn’t hard-coded and it’s far more flexible.

Anthropic has already released their own AI agent in Claude 3.5 Sonnet — a groundbreaking “computer use” feature.

Google and Apple will probably have a major advantage over OpenAI and Anthropic though — cause of Android and iOS.

Gemini Android and Apple Intelligence could seamlessly switch between all your mobile apps for a complex chain of actions.

Since they have deep access to the OS they could even use the apps without having to open them visually.

They’ll control system settings.

You call the Apple Intelligence agent, “Send a photo of a duck to my Mac”, and it’ll generate an image of a duck, turn on Airdrop on iPhone, send the photo and turn Airdrop back off.

But the most powerful capability all these agents will have comes from the API interface — letting you build tools to plug into the agent.

Like you can create a “Spotify” tool that’ll let you play music from the agent. Or a “Google” tool to check your email and plan events with your calendar.
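Here’s a rough sketch of what such a tool plugin could look like — purely hypothetical, since none of these agent APIs are public yet (the agent stub, registerTool call, and tool shape are all made up for illustration):

JavaScript
// Minimal stand-in "agent" so the sketch runs — a real agent does far more
const agent = {
  tools: [],
  registerTool(tool) {
    this.tools.push(tool);
  },
};

// Purely hypothetical tool definition — real agent APIs will differ
const spotifyTool = {
  name: 'play_music',
  description: 'Play a song or playlist on Spotify',
  parameters: { query: 'string' },
  async run({ query }) {
    // here you'd call the real Spotify Web API
    console.log(`Playing "${query}" on Spotify...`);
  },
};

agent.registerTool(spotifyTool);

// The agent would decide on its own when the tool is relevant,
// e.g. for "play some focus music while I work"
agent.tools[0].run({ query: 'lofi focus playlist' });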

So it all really looks promising — and as usual folks like Sam Altman are already promising the world with it.

AI agents may well be the future—personalized, autonomous, and powerful. They’ll revolutionize how we learn, plan, and interact. The race is on.

We may see devastating job-loss impacts in several industries — including software development…

Let’s see how it goes.

Bye bye Apple Intelligence — Gemini for iPhone is amazing 😲

Apple Intelligence isn’t coming to like 90% of iPhones and everyone is pissed…

So no better time for Google to jump on this and finally push out their chatbot for iPhone.

And they didn’t disappoint on features — live conversation, deep app integration, stunning image generation…

I’ve been using the web app ever since it was called Bard and it’s been great, so I was pretty keen on this.

Gemini isn’t aMaZiNg, but it works well for getting up-to-date info, unlike ChatGPT that’s stuck in April 2023 and doesn’t know when to search the web.

Impressively, Gemini has already skyrocketed to the top of the charts in Productivity.

Or maybe not so impressive since it’s Google and it’s only #2 — but then #1 is ChatGPT so…

The first thing I noticed is the clean minimalist interface, quite similar to the one on desktop.

You have more than enough space for core things like input.

It searches the entire web but responds incredibly quickly to give detailed answers to complex questions.

Spark your creativity and generate ideas for stories, poems, or scripts.

The image generation has drastically improved with their new Imagen 3 engine:

Refine images easily — though it also changes other parts of the image:

Gemini Live is a powerful standout feature, enabling real-time interactive conversations.

It provides context-based responses and adapts to the flow of the conversation.

Looks like they did their best to simulate a live conversation with a human — no chat or text feedback confirming what you say. Although they do save the chat history.

The voices aren’t robotic and have decent intonation.

One cool thing is it intelligently figures out when you’re not done speaking — like in a real convo.

Me: What is the umm…

Gemini: What is the what?

Me: I don’t know

Gemini: That’s totally fine…

Me: Shut up

Gemini: I’m sorry I’m not comfortable engaging in conversation that’s disrespectful

Me: I love you

Gemini: Thank you for your kind words…

You can control it from the Notification Centre.

It integrates seamlessly with all your other Google services — I asked it to check my latest email and it was on point.

Final thoughts

It’ll be exciting to see what Gemini can do in the future.

Of course it has no chance of the deep iOS integration that Apple Intelligence enjoys.

But it’s a versatile and intelligent AI Assistant worth checking out.

The new M4 Macbook Pro is a MONSTER

This beast just got upgraded to the M4 chip and it’s more dangerous than ever.

This is the greatest value for money you’ll ever get in a Macbook Pro.

Especially as Apple finally caved to give us 16 GB RAM in base models for the same price.

The unbelievable performance of the M4 chip makes it every bit as deadly as the new M4 Mac Mini and iMac — yet ultraportable and lightweight.

Starting with a 10-core CPU and 10-core GPU.

Image source: theverge.com

Can’t compare with the Macbook Air on portability tho — lightest PC I’ve ever felt.

Apple says M4 is up to 3 times faster than M1 — but M1 is still very good so don’t go rushing to throw your money at Tim Cook.

In practice you’ll probably only notice a real difference for long tasks in heavy apps — like the Premiere Pro 4K export in the benchmarks we just saw.

M4 is also an unbelievable 27 times faster than Intel Core i7 in tasks like video processing:

Core i7 used to be a big deal back then!

So imagine how much faster the M4 Pro or M4 Max would be than the Intel Core i7.

Yeah their M4 processor comes in 3 tiers: M4, M4 Pro, and M4 Max.

The base model also comes with as many as 3 Thunderbolt ports — unlike the 2 in previous base models.

Thunderbolt ports look just like USB-C but with much faster data transfer speeds — and obviously light years ahead of USB-A.

With Thunderbolt 5 you get incredible speeds of up to 120Gb/s — available in Pro models with M4 Pro and M4 Max.

Along with a standard MagSafe 3 charging port and an SDXC card slot to easily import images from digital cameras.

Plus a headphone jack and an HDMI port.

Definitely geared for the pros, and pretty packed compared to the ultraminimalist MacBook Air:

Btw ever wondered why Apple still puts headphone jacks in Macs?

It’s because wired headphones give maximum audio quality and have zero lag — something that’s essential for the Pros. Perfect audio.

And perfect video too — with a sophisticated 12 mega-pixel Center Stage camera.

Center Stage makes sure you’re always at the center of the recording even as you move around.

Crystal clear Liquid Retina display in two great sizes

  • 14 inch — 3024 x 1964
  • 16 inch — 3456 x 2234

Imo take the 14 over the 16 — it’s more than enough screen space.

You know I thought the 13 inch MacBook Air would be small but it turned out perfect and I was happy not to go with the 15.

My previous 15.6″ PC now seems humongous and too much for a laptop. 16″ seems insane.

Better to get a huge external monitor:

But on its own, it’s a monstrously good laptop for coding and other heavy tasks:

The base model starts at $1599 for 16 GB RAM and a 512 GB SSD with the M4 chip, with many lethal upgrade options:

  • M4 16 GB RAM and 1 TB SSD — $1799
  • M4 24 GB RAM and 1 TB SSD — $1999
  • M4 Pro 24 GB and 512 GB SSD — $1999
  • M4 Pro 24 GB and 1 TB SSD — $2399
  • M4 Max 36 GB RAM and 1 TB SSD — $3199 🤯

Overall the M4 MacBook Pro strikes the perfect balance of power, sleek design, and value, making it an excellent choice for professionals seeking the ideal portable workstation.

Microsoft is getting even more desperate with AI 🤦‍♂️

Microsoft is going all in on the AI bandwagon…

Bing, Edge, Windows… now it’s Notepad’s turn.

Image source: bleepingcomputer.com

“Custom rewrite” — tweak tone, format and length:

Image source: theverge.com

Not bad but I doubt most people will use it.

Most people just use Notepad as a simple text editor to hold temp info and other super short-term stuff… not for this.

And wouldn’t it have been much better if it was just a text input to rewrite the text however we want flexibly?

Even good old Paint will be getting AI soon — “generative erase” (lol)

So you can remove any object from the photo and it’ll automagically create a seamless background.

❌ Before erase:

Image source: bleepingcomputer.com

✅ After erase:

Image source: bleepingcomputer.com

Two more for the growing list of MS products possessed with the AI spirit.

Even their Surface devices are all about AI now:

Even their Android keyboard app 😂

Remember this?

When Bing Chat first came out — an interesting chatbot getting a lot of attention that could have finally made a dent in Google’s numbers.

Only for them to brutally degrade it into your everyday chatbot.

Then they brought their annoying Copilot button to Edge — one more setting to change whenever I newly install it.

Then they brazenly replaced the NEW TAB button with this garbage in their mobile apps. That was the last straw for me — no more Edge on Android/iOS.

Imagine depriving users of easy access to such a fundamental action in a browser because of AI.

Imagine the horror of a Camera app where you see a Copilot button where the Snap button should be.

Luckily I don’t use Windows anymore so I won’t have to deal with their Copilot in Windows garbage:

And their aggressive marketing has certainly rubbed a lot of people the wrong way — like how they did when Edge went Chromium.

Lol… someone was mad.

It’s just insane how many companies jumped on the AI bandwagon ever since ChatGPT.

Notion, Spotify, Zapier, Canva… even Apple finally caved.

Everything is AI now. Even the most mundane procedural algorithm to automate something is AI lol.

No doubt many AI upgrades have been significant — like the recent Google Search Gen AI, which probably decimated the traffic of millions of sites out there.

But a great deal of them add very little value — clearly there just to prey on the emotions of users and investors.

But one thing is certain: this AI hype isn’t stopping anytime soon.

Let’s see how long until the so-called AGI comes around.