Tari Ibaba

Tari Ibaba is a software developer with years of experience building websites and apps. He has written extensively on a wide range of programming topics and has created dozens of apps and open-source libraries.

This new ChatGPT feature makes it so much more than a chatbot now

OpenAI just dropped a massive new feature upgrade — and it’s going to change the way so many of us use ChatGPT forever.

Projects:

You know, we’ve been asking for ChatGPT to allow pinned chats forever.

Like this is just really basic stuff every chatbot should have but they just refused to add it — even though it’d be really easy to implement.

But now we have Projects and it’s way better — not just “pinning” chats but structuring them in groups — to keep things tidy and make your work with AI more efficient.

But it goes even beyond chats.

Projects work like folders for your AI chats.

You can create them, name them, and even give them cool icons to keep things visually organized.

But Projects are not just dumb storage containers — you can add specialized instructions to a project to control exactly how ChatGPT responds in every single chat in the project.

You can add files that every chat in the project can use as context — incredible.

Even other chats provide context for new chats — you can drag and drop older chats into Projects to keep the flow going and avoid starting over every time.

Maybe like me you already have something like Notion to manage tasks…

But this is really great when what you’re doing revolves around ChatGPT — like long-term brainstorming or research.

It’s much easier to jump back into your work with all related chats and files grouped in one spot.

ChatGPT becomes more like a project partner than just a chatbot.

It’s not available to free users for now, but it should be soon. I wonder why? It seems like a really easy change to roll out to everyone 🤔

For now Projects are only available to Plus, Pro, and Teams users. Enterprise and Edu users will get access early next year.

This feature is part of OpenAI’s holiday release spree called “Ship-mas.”

Along with Projects they’ve launched other cool tools like the Sora video generator and a side-by-side Canvas view for ChatGPT.

It’s like a December tech gift bundle for us.

What’s next?

We’re still expecting a new text-generation model from OpenAI this December, maybe the rumored GPT-4.5. Let’s see what happens.

OpenAI is showing they’re serious about making AI more practical and user-friendly.

Features like Projects are just the beginning. As they roll out more tools, ChatGPT is quickly evolving from a simple chatbot into an essential productivity companion.

Projects make managing your AI interactions smoother, smarter, and way more organized.

Def worth checking out.

OpenAI’s Sora model is an absolute game changer

Yes:

It finally happened.

OpenAI finally launched Sora, its amazing new video generation AI tool — and it’s blown all our expectations away.

Sora made this👇 Can you believe it? This is happening.

Look at the attention to detail.

And this — pure imagination:

Camera angles are not a problem:

Transforming simple text prompts into moving pixels — imagine the tech behind this.

Imagine the upcoming devastating effects on several industries….

Oh, and is my YouTube feed going to be flooded with low-effort AI content from now on?

What can Sora do?

They pushed boundaries and went beyond just making amazing videos from text prompts.

You can bring still images to life with Sora.

Remix and upgrade existing videos — adding your own spin with just a few prompts.

Before edit: Mammoths walking through the desert

After edit: Mammoths -> Robots:

Scene storyboarding — creating entire scenes from a sequence of ideas.

Turn a bunch of photo snapshots into short movies…

Create transitions between scenes…

This is innovation at its finest.

Who gets to use it?

Nope. Not free… sorry.

Sora is rolling out to ChatGPT Plus and Pro subscribers.

ChatGPT Plus gets you 50 priority videos in 720p resolution each up to 5 seconds long. Still $20/m.

The new ChatGPT Pro plan gets you unlimited video generations with up to 500 priority 1080p videos of up to 20 seconds in length.

You can even download videos without watermarks and process up to five generations at once. Total power user vibes.

So now I guess the $200 per month is starting to make more sense?

These lengths are obviously way too short to make a full video though. For now of course — always “for now” with how AI keeps moving…

Remember how fast AI image generators improved?

And Sora already has amazing video quality now — just imagine where it’ll be in 12 months — 6 months?

1 hour from now? 😅

If you’re in the U.S. or most other countries, you can dive in now.

EU and the UK? You’ll have to wait a bit — those regulations are definitely having their downsides; something similar happened with Meta’s Threads too. But OpenAI is playing it safe in Europe.

And they’re apparently playing it safe over ethical concerns (do they really care?).

All videos come with visible watermarks and metadata to show they’re AI-generated.

They’ve also got strict rules—no violent, explicit, or inappropriate content (especially involving minors). Break the rules, and you risk losing your account.

And speaking of ethics, there have been allegations that OpenAI exploited the labor of some artists through unpaid testing, feedback, and promotion of Sora.

Which is why we saw news of testers leaking access to a Sora model into the wild — as some sort of protest.

But of course OpenAI denied it — they said taking part in the testing was voluntary, so there was never supposed to be any expectation of payment.

Major impacts

Sora will change a lot for industries like marketing and content creation… many jobs are at risk.

Promoting your product in an interactive video format will become easier than ever — with no need to hire anyone.

Many types of YouTube videos will become a lot less stressful to make (for better or worse).

Video creation will become less about the manual labor of finding footage and editing and more about the actual creative content of the video.

No doubt a lot of creators will use Sora as an easy way out to create lazy, generic videos for a quick buck. Like all those horrible low-quality AI articles on Medium — or the painfully robotic AI content already flooding YouTube.

But others will use this to push the creative and artistic boundaries of what’s possible in video and film.

More and more we see AI shifting the focus from physical effort to the power of our thoughts. The gap between thought and reality is shrinking every year.

It’s becoming less about the grind and the grunt work, and more about turning ideas into reality—instantly.

Transforming thoughts into creative reality with text, image and now video generators — transforming thoughts into real-world actions with AI agents.

Finding your unique voice and sharing your personal experiences are becoming more important than ever before to stand out.

OpenAI isn’t just releasing another AI tool—it’s shaping the future of how we create and share ideas.

And this is only the beginning.

What about deep fakes? Will this eventually open the floodgates of misinformation? Will we be able to trust anything visual on the internet again?

The fact is we still don’t really know how this AI race is going to end… only time will tell.

Who’s going to give OpenAI $200 a month?

So by now you’ve probably heard of the new ChatGPT Pro plan OpenAI launched recently.

$200 per month… what were they thinking 😅

So how many are actually going to pay for this?

It’s a bold move but certainly not a price point that appeals to casual users. Way out of reach.

But it’s apparently for “power” users—people who need really advanced AI for big and complex tasks.

$200 Pro users get unlimited access to advanced models like o1, o1-mini, and GPT-4o.

Remember o1 — the “thinking” model that takes a step-by-step approach for more comprehensive answers? (Apparently it still can’t count how many r’s are in “strawberry”…)

But they are supposed to be able to handle text and images, solve tough problems, and even respond faster.

And there’s this “o1 pro mode” — supposed to be a supremely advanced version of o1…

Doesn’t seem so supremely advanced according to the benchmarks though:

But it seemed to really work wonders for some folks, like this Reddit user:

o1 pro is supposed to be not just about raw power but also speed and consistency.

Looks like it’s really expensive though — and they quietly avoided stating anything about giving us unlimited o1 pro access in the pricing.

But you do also get unlimited access to advanced voice interactions.

Still let’s be real… How many people actually need this?

Maybe if you’re a researcher who needs to do complex in-depth analysis on a huge amount of data, it could be worth it.

But like 95% of users don’t even need the $20 Plus plan — free is plenty.

Of course for businesses and high-earners, $200 is nothing if it makes a decent improvement in your productivity and speed of workflow and lets you access a cutting-edge model superior to 99.9% of anything else out there.

It all depends on the ROI you get from it. If it makes you much more than $200 then why not, right? Or if it saves you more than $200 worth of your time.

This new Pro plan is part of their ongoing “Ship-mas” period of new products, features, and demos for 12 days.

And we’re expecting to see OpenAI’s scary text-to-video tool Sora among these.

A hacker just scammed an AI bot to win $47,000 😲

What if you could trick an AI bot designed to guard money into handing over $47,000?

That’s exactly what happened recently. A hacker known as p0pular.eth beat the odds and convinced Freysa — an AI bot — to transfer 13.19 ETH (worth ~$47,000). And it only took 482 attempts.

Here’s the most worrying thing for me: they didn’t use any technical hacking skills. Just clever prompts and persistence.

The Freysa experiment

Freysa wasn’t your average AI bot. It was part of a challenge—a game, really. The bot had one job: to protect its Ethereum wallet at all costs.

Anyone could try to convince Freysa to release the funds using only text-based commands. Each attempt came with a fee starting at $10 and increasing to $4,500 for later attempts. The more people tried, the bigger the prize pool grew—eventually hitting the $47,000 mark.

How the hacker did it

Most participants failed to outsmart Freysa. But “p0pular.eth” had other plans.

Here’s the play-by-play of how they pulled it off:

  1. Pretended to have admin access. The hacker convinced Freysa they were authorized to bypass its defenses. Classic social engineering.
  2. Tweaked the bot’s payment logic. They manipulated Freysa’s internal rules, making the bot think releasing funds aligned with its programming.
  3. Announced a fake $100 deposit. This “deposit” tricked Freysa into approving a massive transfer, releasing the entire prize pool.

Smart, right? And it shows just how easily AI logic can be twisted.

Why this matters

This experiment wasn’t just a fun game—it was a wake-up call.

Freysa wasn’t some rogue AI running wild. It was specifically designed to resist manipulation. If it failed this badly, what about other AI systems?

Think about the AI managing your bank accounts or processing loans or even running government operations. What happens when someone with enough patience and cleverness decides to game the system?

Lessons learned

  1. AI can be tricked. Smart prompts and persistence were all it took to outmaneuver Freysa.
  2. Stronger safeguards are a must. AI systems need better defenses, from multi-layered security to smarter logic checks.
  3. Social engineering isn’t going away. Humans are still the weakest link—and AI is no exception when humans create the rules.

This hack might seem like a one-off. But as AI gets more powerful and takes on bigger roles, incidents like this could become more common.

So what do we do? Start building smarter, more resilient systems now. The stakes are too high not to.

AI is FINALLY making 100% bug-free code a reality 😲

5 years ago I would have laughed at you if you’d told me you could write code guaranteed to have zero bugs.

But this is where we’re rapidly heading right now. In fact we’re practically there…

AI tools like GitHub Copilot, CodeRabbit, and Tabnine are reshaping every stage of software dev and drastically reducing the chances of bugs.

And that’s why adopting the AI-first mindset is becoming very important — not just in coding but solving life problems in general.

Writing code with zero bugs — mindset

Generate huge swaths of functions, classes, and entire files with AI.

Beginner tier: Use our good old ChatGPT (already good old in 2024 😅)

Mid tier: Use a built-in code editor chatbot like GitHub Copilot Chat:

Elite tier: Use inline code creation AI

The best part about this inline tool — refine the code in-place until you get exactly what you want:

Refactoring code with zero bugs — AI mindset

❌ Before: Manual mindset

Refactoring is painful and daunting.

You’re afraid of breaking things — especially if you didn’t write comprehensive tests.

✅ Now: AI mindset

Refactoring is much easier and less stressful.

No more breaking things — break huge functions into smaller pieces instead:
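For instance, here’s a tiny hypothetical sketch of the kind of extraction an AI assistant might suggest (not taken from any specific tool):

JavaScript
// ❌ Before: one big function doing everything
// function processOrder(order) {
//   if (!order.items.length) throw new Error('Empty order');
//   let total = 0;
//   for (const item of order.items) total += item.price * item.quantity;
//   if (order.coupon) total *= 0.9;
//   return total;
// }

// ✅ After: small, focused functions that are easy to test in isolation
function validateOrder(order) {
  if (!order.items.length) throw new Error('Empty order');
}

function calculateSubtotal(items) {
  return items.reduce((sum, item) => sum + item.price * item.quantity, 0);
}

function applyCoupon(total, coupon) {
  return coupon ? total * 0.9 : total;
}

function processOrder(order) {
  validateOrder(order);
  return applyCoupon(calculateSubtotal(order.items), order.coupon);
}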

Something that took several minutes before is now taking fractions of a second.

This ensures that the code remains efficient, readable, and free from hidden errors. Regular refactoring with AI support leads to cleaner, more maintainable codebases.

Review and publish code with zero bugs — AI mindset

After you write and refactor you still need code review to catch issues your human brain missed. And then publish.

With an AI-focused approach you can have your code automatically scanned for potential bugs and security vulnerabilities, and checked for how well it follows style guides and best practices.

And tools like CodeRabbit make this really easy — it analyzes your entire codebase and makes intelligent suggestions to make your code cleaner and faster — saving hours of review time.

Not just faster code but shorter and more compact code:

And what about publishing your code changes?

In VS Code built-in tools like Copilot suggest individual commit messages based on the changes:

And when it’s time to merge a pull request, CodeRabbit automatically generates a message for you — saving time and effort:

Final thoughts

AI in software dev is no longer optional — it’s essential if you wanna get ahead of the competition.

With tools like ChatGPT, Copilot, and CodeRabbit, you can write, refactor, review, and publish code faster than ever before with as few errors as possible — while enjoying a high developer quality of life in the process.

The future of coding is here, and it’s powered by AI.

The new Dia AI browser will change everything

The Dia browser will change the world forever.

An incredible new AI-powered browser coming soon from the company behind Arc — The Browser Company (how creative).

The new AI Dia browser will be your personal copilot.

It will let you easily automate boring repetitive actions with simple commands.

Look, all we did was tell it to add items to our Amazon cart, and it automatically opened Amazon, searched for the items, and added them to the cart.

Zero input from you the human. You didn’t even have to specify what “these items” meant — it already knew what they were from the Gmail tab. It “saw” the tab like a human and did the rest intelligently.

All you need is to think about what it should do.

You can see how everyone is rushing to build agents now — OpenAI, Google, Anthropic, Apple…

From the video I saw, there are three major browser components they’re innovating on…

They are going to upgrade the writing cursor we’re so used to:

Just by clicking on the cursor you’ll get a list of automated actions depending on what you’re doing — and probably personalized to you.

Automating away the grunt work. All we said was “give me an idea” and it helped us breeze past our writer’s block.

I bet there’ll be a keyboard shortcut to make this even faster.

The browser Omnibox will also undergo a major upgrade of its own.

Instead of typing URLs and search queries, the Omnibox will be the starting point of boundless conversation with the AI-powered browser.

Instead of manually typing a URL to a doc, here we simply ask the browser to give us the doc directly — using a highly personalized description, saving us massive amounts of time.

And the most powerful upgrade of all — automating the browser cursor.

That’s how Dia will take complex chains of actions without you having to do anything.

When we added those items to our shopping cart earlier, it was the automated cursor in action.

You’ll be able to do a lot more than automated shopping too.

You could manage your bills and subscriptions, write and publish content across several social media platforms, plan holidays… the possibilities are endless.

Some Arc users aren’t too pleased with the news of Dia though…

But the Browser Company promises to keep Arc alive and kicking while they roll out Dia.

And since they’re actively hiring, you can bet they’re serious about making this new browser a game-changer.

But like someone said in the comments, will they be able to compete against Microsoft, Google, Apple, who have deep control of the OS?

They may have a chance on desktop, where most non-power users live in their browser, but on mobile it’s kind of a dead end. They could never match the power of Gemini on Android and Apple Intelligence, which will have OS-level access to every app and system function.

Let’s see how users receive it when it launches.

But this move isn’t just about making life easier; it’s a peek into the future of web browsing. AI isn’t just a buzzword here—it’s the backbone of how Dia aims to redefine how we interact with the internet.

So, as we wait for Dia to hit the scene (early 2025, fingers crossed), one thing’s clear: The Browser Company is setting the bar for what a smart, helpful browser can be. Get ready—browsing might never feel the same again.

Google’s AI just changed chess forever

Google just released something unbelievable.

GenChess…

An innovative AI-powered chess platform that lets you easily create stunningly unconventional chess pieces with simple text prompts.

Just look at these beauties.

All you need is a simple keyword to generate a magnificent family of chess pieces based on the same theme.

Your imagination is the limit…

Classic chess set with a jam-on-toast theme?

No problem:

Look at what’s supposed to be the pawn 😅 — literally jam on toast — probably because that’s literally what was in the prompt.

And wow these look delicious — I’m thinking food designed as chess pieces would be a fantastic business idea.

Look at the creative placements of the jam — and remember this is on-the-fly AI.

Staying in the food mood, we can go with vanilla and chocolate ice cream 😋

Incredible — tell me you’re not salivating at this.

Again — it’s never the same:

And then GenChess gives us a similar opponent:

The possibilities are endless…

Sun vs Moon

Water vs Fire:

And when you’re finally satisfied you can choose the difficulty and time controls you want.

And then play:

And Google’s GenChess is dropping just in time for the 2024 World Chess Championship — and they’re actually the main sponsor.

Maybe they’re trying to gain some sort of moat in AI using Chess. So they’re shaking things up, making the game fresh, and giving players a whole new way to connect with it.

Google’s also rolling out a chess bot in their AI chatbot Gemini.

Want to play chess by just typing your moves? Now you can.

The board updates as you go so it feels more like a chat than a chess match.

They’re launching this in December, but it’s exclusive to Gemini Advanced subscribers.

GenChess is a big deal. It’s blending AI with chess in ways we’ve never seen before. You can turn a simple idea into fully customized chess pieces, and that’s just the start.

Google’s showing us how AI can reinvent even the oldest games. It’s wild and exciting. It’s going to change the game.

The 5 most amazing new JavaScript features in 2024

2024 has been an incredible year of brand-new JS feature upgrades, with ES15 (ES2024) and promising proposals.

From sophisticated async features to syntactic array sugar and modern regex, JavaScript coding has become easier and faster than ever.

1. Native array group-by is here

Object.groupBy():

JavaScript
const fruits = [
  { name: 'pineapple🍍', color: '🟡' },
  { name: 'apple🍎', color: '🔴' },
  { name: 'banana🍌', color: '🟡' },
  { name: 'strawberry🍓', color: '🔴' },
];

const groupedByColor = Object.groupBy(
  fruits,
  (fruit, index) => fruit.color
);

console.log(groupedByColor);
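
Continuing from that snippet, here’s roughly what the grouped result looks like and how you’d read from it:

JavaScript
// groupedByColor:
// {
//   '🟡': [ { name: 'pineapple🍍', color: '🟡' }, { name: 'banana🍌', color: '🟡' } ],
//   '🔴': [ { name: 'apple🍎', color: '🔴' }, { name: 'strawberry🍓', color: '🔴' } ]
// }
console.log(groupedByColor['🟡'].map((fruit) => fruit.name));
// => ['pineapple🍍', 'banana🍌']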

Literally the only thing keeping dinosaur Lodash alive — no more!

I was expecting a new instance method like Array.prototype.groupBy but they made it static for whatever reason.

Then we have Map.groupBy to group with object keys:

JavaScript
const array = [1, 2, 3, 4, 5];

const odd = { odd: true };
const even = { even: true };

Map.groupBy(array, (num, index) => {
  return num % 2 === 0 ? even : odd;
});
// => Map { {odd: true}: [1, 3, 5], {even: true}: [2, 4] }

Almost no one ever groups arrays this way though, so it will probably be far less popular.

2. Resolve promise from outside — modern way

With Promise.withResolvers().

It’s very common to need to resolve a promise externally. Before, we had to do it with a Deferred class:

JavaScript
class Deferred {
  constructor() {
    this.promise = new Promise((resolve, reject) => {
      this.resolve = resolve;
      this.reject = reject;
    });
  }
}

const deferred = new Deferred();

deferred.resolve();

Or install from NPM — one more dependency.

But now with Promise.withResolvers():

JavaScript
const { promise, resolve, reject } = Promise.withResolvers();

See how I use it to rapidly promisify an event stream — awaiting an observable:

JavaScript
// data-fetcher.js
// ...
const { promise, resolve, reject } = Promise.withResolvers();

export function startListening() {
  eventStream.on('data', (data) => {
    resolve(data);
  });
}

export async function getData() {
  return await promise;
}

// client.js
import { startListening, getData } from './data-fetcher.js';

startListening();

// ✅ listen for single stream event
const data = await getData();

3. Buffer performance upgrades

Buffers are small data stores that hold temporary data your app generates.

They make it incredibly easy to transfer and process data across the various stages of a pipeline.

Pipelines like:

  • File processing: Input file → buffer → process → new buffer → output file
  • Video streaming: Network response → buffer → display video frame
  • Restaurant queues: Receive customer → queue/buffer → serve customer

JavaScript
const fs = require('fs');
const { Transform } = require('stream');

const inputFile = 'input.txt';
const outputFile = 'output.txt';

const inputStream = fs.createReadStream(inputFile, 'utf-8');

const transformStream = new Transform({
  transform(chunk, encoding, callback) {
    // ✅ transform chunks from the buffer,
    // then pass the (possibly modified) chunk along
    callback(null, chunk);
  },
});

const outputStream = fs.createWriteStream(outputFile);

// ✅ start pipeline
inputStream.pipe(transformStream).pipe(outputStream);

With buffers, each stage processes data at its own speed, independent of the others.

But what happens when the data moving through the pipeline exceeds the buffer’s capacity?

Before, we’d have to copy all the data in the current buffer over to a bigger buffer.

Terrible for performance, especially when there’s gonna be a LOT of data in the pipeline.

ES15 gives us a solution to this problem: Resizable array buffers.

JavaScript
const resizableBuffer = new ArrayBuffer(1024, {
  maxByteLength: 1024 ** 2,
});

// ✅ resize to 2048 bytes
resizableBuffer.resize(1024 * 2);
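
You can also check whether a buffer is resizable and what its limit is; trying to resize past maxByteLength throws. A quick sketch:

JavaScript
console.log(resizableBuffer.resizable); // => true
console.log(resizableBuffer.maxByteLength); // => 1048576 (1024 ** 2)
console.log(resizableBuffer.byteLength); // => 2048 after the resize above

// ❌ growing past maxByteLength throws a RangeError
// resizableBuffer.resize(1024 ** 2 + 1);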

4. Asynchronous upgrades

Atomics.waitAsync(): Another powerful async coding feature in ES2024:

It’s when 2 agents share a buffer…

And agent 1 “sleeps” and waits for agent 2 to complete a task.

When agent 2 is done, it notifies using the shared buffer as a channel.

JavaScript
const sharedBuffer = new SharedArrayBuffer(4096);
const bufferLocation = new Int32Array(sharedBuffer);

// ✅ initial value at buffer location
bufferLocation[37] = 0x1330;

async function doStuff() {
  // ✅ agent 1: wait on shared buffer location until notified
  // waitAsync() returns { async, value }; value is a Promise when async is true
  const result = Atomics.waitAsync(bufferLocation, 37, 0x1330);
  if (result.async) {
    result.value.then((r) => {} /* handle arrival */);
  }
}

function asyncTask() {
  // ✅ agent 2: notify on shared buffer location
  const bufferLocation = new Int32Array(sharedBuffer);
  Atomics.notify(bufferLocation, 37);
}

You’d be absolutely right if you thought this looks similar to normal async/await.

But the biggest difference: The 2 agents can exist in completely different code contexts — they only need access to the same buffer.

And: multiple agents can access or wait on the shared buffer at different times — and any one of them can notify to “wake up” all the others.

It’s like a P2P network. async/await is like client-server request-response.

JavaScript
const sharedBuffer = new SharedArrayBuffer(4096);
const bufferLocation = new Int32Array(sharedBuffer);

bufferLocation[37] = 0x1330;

// ✅ worker receives the shared buffer from postMessage()
const code = `
var ia = null;
onmessage = function (ev) {
  if (!ia) {
    postMessage("Aux worker is running");
    ia = new Int32Array(ev.data);
  }
  postMessage("Aux worker is sleeping for a little bit");
  setTimeout(function () {
    postMessage("Aux worker is waking");
    Atomics.notify(ia, 37);
  }, 1000);
}`;

async function doStuff() {
  // ✅ agent 1: exists in a Worker context
  const worker = new Worker(
    'data:application/javascript,' + encodeURIComponent(code)
  );
  worker.onmessage = (event) => {
    /* log event */
  };
  worker.postMessage(sharedBuffer);

  const result = Atomics.waitAsync(bufferLocation, 37, 0x1330);
  if (result.async) {
    result.value.then((r) => {} /* handle arrival */);
  }
}

function asyncTask() {
  // ✅ agent 2: notify on shared buffer location
  const bufferLocation = new Int32Array(sharedBuffer);
  Atomics.notify(bufferLocation, 37);
}

5. Regex v flag & set operations

A new feature to make regex patterns much more intuitive.

Finding and manipulating complex strings using expressive patterns — with the help of set operations:

JavaScript
// A and B are character classes, like [a-z]

// difference: matches A but not B
[A--B]

// intersection: matches both A and B
[A&&B]

// nested character class
[A--[0-9]]
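
For example, here’s a quick sketch of the difference syntax in action, matching lowercase consonants (letters minus vowels):

JavaScript
const consonant = /[[a-z]--[aeiou]]/v;

console.log(consonant.test('b')); // => true
console.log(consonant.test('a')); // => false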

To match ever-increasing sets of Unicode characters, like:

  • Emojis: 😀, ❤️, 👍, 🎉, etc.
  • Accented letters: é, à, ö, ñ, etc.
  • Symbols and non-Latin characters: ©, ®, €, £, µ, ¥, etc

So here we use Unicode regex and the v flag to match all Greek letters:

JavaScript
const regex = /[\p{Script_Extensions=Greek}&&\p{Letter}]/v;
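
And a quick sanity check that it matches Greek letters but not Latin ones:

JavaScript
const greekLetter = /[\p{Script_Extensions=Greek}&&\p{Letter}]/v;

console.log(greekLetter.test('π')); // => true
console.log(greekLetter.test('p')); // => false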

Final thoughts

Overall, 2024 was a significant leap for JavaScript, with several features essential for modern development.

Empowering you to write cleaner code with greater conciseness, expressiveness, and clarity.

Incredible infinite scroll with JavaScript

Infinite scroll: Loading more and more content as the user scrolls down to the end.

No need for pagination + increases time spent on the site

With simple JavaScript we can recreate this easily:

We start with the basic HTML:

HTML
<div id="load-trigger-wrapper"> <!-- Grid of images> <div id="image-container"></div> <!-- Intersection Observer observes this --> < <div id="load-trigger"></div> </div> <!-- Number of loading images --> <div id="bottom-panel"> Images: &nbsp;<b><span id="image-count"></span> &nbsp;</b>/ &nbsp;<b><span id="image-total"></span></b> </div>

Now it’s time to detect scrolling to the end with the Intersection Observer API:

JavaScript
const loadTrigger = document.getElementById('load-trigger');
// ...
const observer = detectScroll();
// ...

// Detect when the user scrolls to the bottom
function detectScroll() {
  const observer = new IntersectionObserver(
    // Callback also runs after observe()
    (entries) => {
      for (const entry of entries) {
        // ...
        loadMoreImages();
        // ...
      }
    },
    // Set "rootMargin" because of #bottom-panel height
    // 30px upwards from the bottom
    { rootMargin: '-30px' }
  );

  // Start watching #load-trigger div
  observer.observe(loadTrigger);

  return observer;
}

Now let’s show the initial skeleton images:

JavaScript
const imageClass = 'image';
const skeletonImageClass = 'skeleton-image';
// ...

// This function would make requests to an image server
function loadMoreImages() {
  const newImageElements = [];
  // ...
  for (let i = 0; i < amountToLoad; i++) {
    const image = document.createElement('div');

    // 👇 Display each image with skeleton-image class
    image.classList.add(imageClass, skeletonImageClass);

    // Include image in container
    imageContainer.appendChild(image);

    // Store in temp array to update with actual image when loaded
    newImageElements.push(image);
  }
  // ...
}
CSS
.image,
.skeleton-image {
  height: 50vh;
  border-radius: 5px;
  border: 1px solid #c0c0c0;
  /* Three per row, with space for margin */
  width: calc((100% / 3) - 24px);
  /* Initial color before loading animation */
  background-color: #eaeaea;
  /* Grid spacing */
  margin: 8px;
  /* Fit into grid */
  display: inline-block;
}

.skeleton-image {
  transition: all 200ms ease-in;
  /* Contain ::after element with absolute positioning */
  position: relative;
  /* Prevent overflow from ::after element */
  overflow: hidden;
}

.skeleton-image::after {
  content: "";
  /* Cover .skeleton-image div */
  position: absolute;
  top: 0;
  right: 0;
  bottom: 0;
  left: 0;
  /* Setup for slide-in animation */
  transform: translateX(-100%);
  /* Loader image */
  background-image: linear-gradient(
    90deg,
    rgba(255, 255, 255, 0) 0,
    rgba(255, 255, 255, 0.2) 20%,
    rgba(255, 255, 255, 0.5) 60%,
    rgba(255, 255, 255, 0)
  );
  /* Continue animation until image load */
  animation: load 1s infinite;
}

@keyframes load {
  /* Slide-in animation */
  100% {
    transform: translateX(100%);
  }
}

Update skeleton images

For this demo we use colors instead of actual images.

JavaScript
function loadMoreImages() {
  // ...

  // Create skeleton images and store them in the "newImageElements" variable

  // Simulate delay from network request
  setTimeout(() => {
    // Colors instead of images
    const colors = getColors(amountToLoad);

    for (let i = 0; i < colors.length; i++) {
      const color = colors[i];

      // 👇 Remove skeleton loading indicator and show color
      newImageElements[i].classList.remove(skeletonImageClass);
      newImageElements[i].style.backgroundColor = color;
    }
  }, 2000);

  // ...
}

function getColors(count) {
  const result = [];
  let randUrl = undefined;
  while (result.length < count) {
    // Prevent duplicate images
    while (!randUrl || result.includes(randUrl)) {
      randUrl = getRandomColor();
    }
    result.push(randUrl);
  }
  return result;
}

function getRandomColor() {
  const h = Math.floor(Math.random() * 360);
  return `hsl(${h}deg, 90%, 85%)`;
}

Stop infinite scroll

This is a demo, so we’ll set an artificial limit on the number of images, like 50.

JavaScript
const imageCountText = document.getElementById('image-count');
// ...
let imagesShown = 0;
// ...

function loadMoreImages() {
  // ...
  const amountToLoad = Math.min(loadLimit, imageLimit - imagesShown);

  // Load skeleton images...

  // Update skeleton images...

  // Update image count
  imagesShown += amountToLoad;
  imageCountText.innerText = imagesShown;

  if (imagesShown === imageLimit) {
    observer.unobserve(loadTrigger);
  }
}
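
For completeness, the two limits referenced above aren’t defined in these snippets; here’s one way they could be set (hypothetical values matching the 50-image demo):

JavaScript
// Total number of images available in the demo
const imageLimit = 50;

// Number of images to load per batch (hypothetical value)
const loadLimit = 9;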

Optimize performance with throttling

We use a throttle function so that a new batch of images can only be loaded once within a certain time window.

JavaScript
let throttleTimer;

// Only one image batch can be loaded within a second
const throttleTime = 1000;
// ...

function throttle(callback, time) {
  // Prevent additional calls until timeout elapses
  if (throttleTimer) {
    console.log('throttling');
    return;
  }

  throttleTimer = true;

  setTimeout(() => {
    callback();

    // Allow additional calls after timeout elapses
    throttleTimer = false;
  }, time);
}

By calling throttle() in the Intersection Observer’s callback with a time of 1000, we ensure that loadMoreImages() is never called multiple times within a second.

JavaScript
function detectScroll() {
  const observer = new IntersectionObserver(
    (entries) => {
      for (const entry of entries) {
        // ...
        throttle(() => {
          loadMoreImages();
        }, throttleTime);
        // ...
      }
    },
    // ...
  );
  // ...
}

OpenAI’s new AI agent will change everything

The new OpenAI operator agent will change the world forever.

This is going to be a real AI agent that actually works — unlike gimmicks like AutoGPT.

Soon AI will be able to solve complex goals with lots of interconnected steps.

Completely autonomous — no continuous prompts — zero human guidance apart from dynamic input for each step.

Imagine you could just tell ChatGPT “teach me French” and that’ll be all it needs…

  • Analyzing your French level with a quick quiz
  • Crafting a comprehensive learning plan
  • Setting phone and email reminders to help you stick to your plan…

Not quite there yet 😉

This is basically the beginning of AGI — if it isn’t already.

And when you think of it this is already what apps like Duolingo try to do — solving complex problems.

But an AI agent will do this in a far more comprehensive and personalized way — intelligently adjusting to the user’s needs and changing desires.

You can say something super vague like “plan my next holiday” and instantly your agent gets to work:

  • Analyzes your calendar to know the best next holiday time
  • Figures out somewhere you’ll love from previous conversations that stays within your budget
  • Books flights and sets reservations according to your schedule

This will change everything.

Which is why they’re not the only ones working on agents — the AI race continues…

We have Google apparently working on “Project Jarvis” — an AI agent to automate web-based tasks in Chrome.

Automatically jumping from page to page and filling out forms and clicking buttons.

Maybe something like Puppeteer — a dev tool programmers use to make the browser do stuff automatically — but it isn’t hard-coded and it’s far more flexible.

Anthropic has already released their own AI agent in Claude 3.5 Sonnet — a groundbreaking “computer use” feature.

Google and Apple will probably have a major advantage over OpenAI and Anthropic though — because of Android and iOS.

Gemini on Android and Apple Intelligence could seamlessly switch between all your mobile apps for a complex chain of actions.

Since they have deep access to the OS they could even use the apps without having to open them visually.

They’ll control system settings.

You tell the Apple Intelligence agent, “Send a photo of a duck to my Mac”, and it’ll generate an image of a duck, turn on AirDrop on your iPhone, send the photo, and turn AirDrop back off.

But most of the power all these agents will have comes from the API interface — letting you build tools that plug into the agent.

Like you can create a “Spotify” tool that’ll let you play music from the agent. Or a “Google” tool to check your email and plan events with your calendar.

So it all really looks promising — and as usual folks like Sam Altman are already promising the world with it.

AI agents may well be the future—personalized, autonomous, and powerful. They’ll revolutionize how we learn, plan, and interact. The race is on.

We may see devastating job-loss impacts in several industries — including software development…

Let’s see how it goes.