Tari Ibaba

Tari Ibaba is a software developer with years of experience building websites and apps. He has written extensively on a wide range of programming topics and has created dozens of apps and open-source libraries.

VS Code’s new AI agent mode is an absolute game changer

Wow.

VS Code’s new agentic editing mode is simply amazing. I think I might have to make it my main IDE again.

It’s your tireless autonomous coding companion with total understanding of your entire codebase — performing complex multi-step tasks at your command.

Competitors like Windsurf and Cursor have been stealing the show for weeks now, but it looks like VS Code and Copilot are finally fighting back.

It’s similar to Cursor Composer and Windsurf Cascade, but with massive advantages.

This is the real deal

Forget autocomplete, this is the real deal.

With agent mode you tell VS Code what to do in plain English and it immediately gets to work:

  • Analyzes your codebase
  • Plans out what needs to be done
  • Creates and edits files, runs terminal commands…

Look what happened in this demo:

She just told Copilot:

“add the ability to reorder tasks by dragging”

Literally a single sentence and that was all it took. She didn’t need to create a single UI component or edit a single line of code.

This isn’t code completion, this is project completion. This is the coding partner you always wanted.

Pretty similar UI to Cascade and Composer btw.

It’s developing for you at a high level and freeing you from all the mundane + repetitive + low-level work.

No sneak changes

You’re still in control…

Agent mode drastically upgrades your development efficiency without taking away the power from you.

For every action it takes, it keeps you in control:

  • It asks before running non-default tools or terminal commands.
  • It shows you edits before applying them — you can review, accept, tweak, or undo them.
  • You can pause or stop its suggestions anytime.

You’re the driver. It’s just doing the heavy lifting.

Supports that MCP stuff too

Model Context Protocol standardizes how applications provide context to language models.

Agent Mode can interact with MCP servers to perform tasks like:

  • AI web debugging
  • Database interaction
  • Integration with design systems.

You can even enhance Agent Mode’s power by installing extensions that provide tools the agent can use.

You have all the flexibility to select which tools you want for any particular agent action flow.

You can try it now

Agent Mode is free for all VS Code / GitHub Copilot users.

Mhmm, I wonder if this would have been the case if they didn’t have the serious competition they do now?

So here’s how to turn it on:

  1. Open your VS Code settings and set "chat.agent.enabled" to true (you’ll need version 1.99+) — see the snippet after this list.
  2. Open the Chat view (Ctrl+Alt+I or ⌃⌘I on Mac).
  3. Switch the chat mode to “Agent.”
  4. Give it a prompt — something high-level like:
    “Create a blog homepage with a sticky header and dark mode toggle.”
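
For reference, here’s what that settings.json entry looks like (a minimal excerpt, with only the setting named in step 1):

JSON
{
  "chat.agent.enabled": true
}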

Then just watch it get to work.

When should you use it?

Agent Mode shines when:

  • You’re doing multi-step tasks that would normally take a while.
  • You don’t want to micromanage every file or dependency.
  • You’re building from scratch or doing big codebase changes.

For small edits here and there I’d still go with inline suggestions.

Final thoughts

Agent Mode turns VS Code into more than just an editor. It’s a proper assistant — the kind that can actually build, fix, and think through problems with you.

You bring the vision and it brings it to life.

People are using ChatGPT to create these amazing new action figures

Wow this is incredible. People are using AI to generate insane action figures now.

Incredibly real and detailed with spot-on 3D depth and lighting.

All you need to do:

  • Go to ChatGPT 4o
  • Upload a close-up image of a person

Like for this one:

We’d use a prompt like this:

“Create image. Create a toy of the person in the photo. Let it be an action figure. Next to the figure, there should be the toy’s equipment, each in its individual blisters.
1) a book called “Tecnoforma”.
2) a 3-headed dog with a tag that says “Troika” and a bone at its feet with word “austerity” written on it.
3) a three-headed Hydra with a tag called “Geringonça”.
4) a book titled “D. Sebastião”.
Don’t repeat the equipment under any circumstances. The card holding the blister should be a strong orange. Also, on top of the box, write ‘Pedro Passos Coelho’ and underneath it, ‘PSD action figure’. The figure and equipment must all be inside blisters. Visualize this in a realistic way”

You don’t even need to use a photo.

For example to generate the Albert Einstein action figure at the beginning:

Create an action figure toy of Albert Einstein. Next to the figure, there should be the toy’s equipment, like things associated with the toy in pop culture.
On top of the box, write the name of the person, and underneath it, a popular nickname.
Don’t repeat the equipment under any circumstances.
The figure and equipment must all be inside blisters.
Visualize this in a realistic way.

Replace Albert Einstein with any famous name and see what pops out.

This could be a mini-revolution in the world of toy design and collectibles.

Designing unique and custom action figures from scratch — without ever learning complex design or 3D modelling tools.

You describe your dream action figure character in simple language and within seconds you get a high-quality and detailed image.

The possibilities are endless:

Superheroes from scratch with unique powers, costumes, and lore — oh but you may have copyright issues there…

Fantasy warriors inspired by games like Dungeons & Dragons or Elden Ring

Sci-fi cyborgs and aliens designed for custom tabletop campaigns

Anime-style fighters for manga or animation projects

Personal avatars that reflect your identity or alter egos

You can tweak details in the prompts, like armor style, weapons, colors, body type, or aesthetic, to get the exact look you want.

You can even turn a group of people into action figures:

As AI gets better at turning 2D to 3D and 3D printing becomes more accessible, we’re heading toward a future where anyone can:

  • Generate a toy line overnight
  • Print and ship action figures on-demand
  • Remix characters based on culture, story, or emotion

Turning imagination into real-life products and having powerful impacts on the physical world.

7 amazing new features in the Windsurf Wave 6 IDE update

Finally! I’ve been waiting for this since forever.

Automatic Git commit messages are finally here in Windsurf — just one of the many amazing new AI features they’ve added in the new Wave 6 update that I’ll show you.

Incredible one-click deployment from your IDE, intelligent memory… these are huge jumps forward in Windsurf’s software dev capabilities.

1. One-click deployment with Netlify — super easy now

Okay this is a game changer.

They’ve teamed up with Netlify so you can literally deploy your front-end stuff with just one click right from Windsurf.

Literally no context switching. Everything stays in your editor.

Here we’re claiming the deployment to associate it with our Netlify account.

No more messing with all those settings and manual uploads.

With Wave 6 you just build your app, tell Windsurf to deploy, and it’s live instantly.

Just imagine the flow: you whip up a quick landing page with HTML, CSS, and JavaScript and instantly reveal it to the world without even leaving your editor.

Your productivity and QOL just went up with this for real.

2. Conversation table of contents

Quickly jump back to earlier suggestions or code versions.

Super handy for those longer coding chat sessions.

3. Better Tab

They’ve upgraded Windsurf Tab inline suggestions once again to work with even more context.

Now it remembers what you searched for to give you even smarter suggestions.

And it works with Jupyter notebook now too:

4. Smarter memory

Windsurf already remembers past stuff but they’ve made it even better.

Now you can easily search, edit, and manage what it remembers — so you have more control over how the AI understands your project and what you like.

5. Automatic commit messages

Yes like I was saying — Windsurf can now write your Git commit messages for you based on the code changes.

One click, and it gives you a decent summary. Saves a bunch of time and helps keep your commit history clean.

6. Better MCP support

If you’re working in a bigger company, you’ll appreciate that they’ve improved how Windsurf works with Model Context Protocol (MCP) servers.

7. New icons and much more

Just a little update among others — they’ve got some new Windsurf icons for you to personalize things up.

Also the ability to edit suggested terminal commands and much more.

This new one-click deployment could really shake things up for front-end development — making getting your work out there so much faster and letting you focus on the actual coding.

As AI keeps getting better, tools like Windsurf are going to become even more important in how we build software.

Wave 6 shows they’re serious about making things simpler and giving developers smart tools to be more productive and creative. This new update could be a real game changer for AI-assisted development.

OpenAI’s new GPT-4o image generation is an absolute game changer

GPT-4o’s new image generation is destroying industries in real-time.

It hasn’t even been a week and it’s been absolutely insane — even Sam Altman can’t understand what’s going on right now.

Things are definitely not looking too good for apps like Photoshop.

Look how amazing the layering is. Notice the before & after — it didn’t just copy and paste the girl image onto the room image, like Photoshop would do.

It’s no longer sending prompts to DALL-E behind-the-scenes — it understands the images at a deep level.

Notice how the 3D angle and lighting in the after image is slightly different — it knows it’s the same room. And the same thing for the girl image.

These are not just a bunch of pixels or a simple internal text representation to GPT-4o. It “understands” what it’s seeing.

So of course refining images is going to be so much more accurate and precise now.

The prompt adherence and creativity is insane.

What are the odds that something even remotely close to this was in the training data?

It’s not just spitting out something it’s seen before — not that it ever really was, like some claimed. How much it understands your prompt has improved drastically.

And yes, it can now draw a full glass of wine.

Another huge huge upgrade is how insanely good it is at understanding & generating text now.

This edit right here is incredible on so many levels…

1. Understanding the images well enough to recreate them so accurately in a completely different image style, with matching facial expressions.

2. It understands the entire context of the comic conversation well enough to create matching body language.
Notice how the 4th girl now has her left hand pointing — which matches the fact that she’s ordering something from the bar boy.
A gesture that arguably matches the situation even better than in the previous image.
And I bet it would be able to replicate her original hand placement if the prompt explicitly asked it to.

3. And then the text generation — this is something AI image generators have been struggling with since forever — and now see how easily GPT-4o recreated the text in the bubbles.

And not only that — notice how the last girl’s speech bubble now has an exclamation point — to perfectly match her facial expression and this particular situation.

And yes it can integrate text directly into images too — perfect for posters & social media graphics.

If this isn’t a total disruptor in the realm of graphic design and photoshopping and everything to do with image creation, then you better tell me what is.

It’s really exciting and that’s why we’ve been seeing so many of these types of images flood social media — images in the style of the creative studio Ghibli.

And that’s also part of why they’ve had to limit it to paid ChatGPT users for now, with support for the free tier coming soon.

They’ve got to scale the technology and make sure everyone has a smooth experience.

All in all, GPT-4o image gen is a major step forward that looks set to deal a major blow to traditional image editing & graphic design tools like Photoshop & Illustrator.

Greater things are coming.

Amazon’s new AI coding tool is insane

Amazon’s new Q Developer could seriously change the way developers write code.

It’s a generative AI–powered assistant designed to take a lot of the busywork out of building software.

A formidable agentic rival to GitHub Copilot & Windsurf, but with a special AWS flavor baked in — because you know, Amazon…

It doesn’t matter whether you’re writing new features or working through legacy code.

Q Developer is built to help you move faster—and smarter with the power of AWS.

I see they’re really pushing this AWS integration angle — possibly to differentiate themselves from the already established alternatives like Cursor.

Real-time code suggestions as you type — simply expected at this point, right?

It can generate anything from a quick line to an entire function — all based on your comments and existing code. And it supports over 25 languages—so whether you’re in Python, Java, or JavaScript, you’re covered.

Q Developer has autonomous agents just like Windsurf — to handle full-blown tasks like implementing a feature, writing documentation, or even bootstrapping a whole project.

It actually analyzes your codebase, comes up with a plan, and starts executing it across multiple files.

It’s not just autocomplete. It’s “get-this-done-for-me” level AI.

I know some of the Java devs among you are still using Java 8, but Q Developer can help you upgrade to Java 17 automatically.

You basically point it at your legacy mess—and it starts cleaning autonomously.

It even supports transforming Windows-based .NET apps into their Linux equivalents.

And it works with the popular IDEs like VS Code — and probably Cursor & Windsurf too — tho I wonder if it would interfere with their built-in AI features.

  • VS Code, IntelliJ, Visual Studio – Get code suggestions, inline chats, and security checks right inside your IDE.
  • Command Line – Type natural language commands in your terminal, and the CLI agent will read/write files, call APIs, run bash commands, and generate code.
  • AWS Console – Q is also built into the AWS Console, including the mobile app, so you can manage services or troubleshoot errors with just a few words.

Q Developer helps you figure out your AWS setup with plain English. Wondering why a network isn’t connecting? Need to choose the right EC2 instance? Q can guide you through that, spot issues, and suggest fixes—all without digging through endless docs.

Worried about privacy? Q Developer Pro keeps your data private and doesn’t use your code to train models for others. It also works within your AWS IAM roles to personalize results while keeping access secure.

On top of that it helps you write unit tests + optimize performance + catch security vulnerabilities—with suggestions for fixing them right away.

Amazon Q Developer isn’t just another code assistant. It’s a full-blown AI teammate.

It’s definitely worth checking out — especially if you’re deep in the AWS ecosystem.

The new Gemini 2.5 Pro is an absolute game changer

Never underestimate Google!

The new Gemini 2.5 Pro model just changed everything in the AI race.

Look at what it created with just a few sentences of prompt:

Google is no longer playing catch-up in the LLM race. They’re ahead now.

After many of you had been looking down on Gemini for so long, now look…

Back to the #1 spot they undisputedly held only a few years ago, before what happened in 2022.

Aider LLM coding evaluation

Not like Gemini 2.0 was even bad, but this is a massive massive step-up from that.

Reasoning, math, coding… it’s either better or seriously competing with the others at almost everything.

Everybody was praising Grok just a while ago but this Gemini 2.5 has already surpassed it in major areas.

A 1 million token context window with 2 million coming very soon — and as always with multimodal processing for text, images, audio, and video…

Complex multi-step thinking is here…

Look at a puzzle someone used to find out just how good this new model is:

How many of us can even figure out this pattern in less than a minute?

This is complex reasoning stuff that would destroy regular models like GPT 4.5 — but Gemini took just 15 seconds to figure this out. Correctly.

Meanwhile rivals like Claude 3.7 Thinking and Grok spent well over a minute before they could get the answer.

At this point we’ve clearly gotten beyond the traditional next-token prediction models — already “traditional” in 2025.

It’s also apparently insanely good at generating the perfect SVG icon for you with high precision.

And all this for immensely generous prices.

In OpenAI Playground you pay for token usage — completely free in Gemini AI Studio.

And there’s still a free trial for actual API usage.

The paid tier is already looking like it would be much cheaper than OpenAI’s o3-mini — and lightyears cheaper than the o3 monster.

Gemini 2.0 is already around 10 times cheaper than o3-mini and I doubt 2.5’s price will be significantly higher.

With everything we’ve been seeing in recent months it’s just so clear that OpenAI is no longer miles ahead of the rest.

They will never again get the sort of undisputed dominance they got when ChatGPT first came out.

Now Google’s showing us that they were never really out — and unlike OpenAI they won’t be struggling nearly as much with losses and fundraising. Whatever losses they take on AI get massively offset by the avalanche of billions from ad money.

Way easier for them to keep prices ridiculously cheap and generous like they’ve been doing. A massive long-term financial advantage.

The relatively underwhelming release of GPT 4.5 also didn’t help matters for OpenAI. Maybe it was even supposed to be the ultimate GPT 5, but it would have been a massive failure to call it that.

Once again we continue to see the shift from “raw” LLMs to thinking agents with chain-of-thought prompting.

Google is back, Gemini 2.5 is fast, and once again a major change in the dynamics of the AI race.

Redefining what’s possible and setting new expectations for the future.

Windsurf IDE just got a massive AI upgrade that changes everything

The new Windsurf IDE just got a massive new upgrade to finally silence the AI doubters.

The Wave 5 upgrade is here and it’s packed with several game changing AI features to make coding easier and faster.

So ever since Windsurf first came out, they’ve been releasing massive upgrades in something they call Waves.

Like in Wave 3 they brought this intelligent new tab-to-jump feature that I’ve really been enjoying so far.

Just like you get inline code suggestions that you accept with Tab, now you get suggestions to jump to the place in the file where you’re most likely to continue making changes.

So like now it’s not just predicting your next line — it’s predicting your overall intent in the file and codebase at large. The context and ability grew way beyond just the code surrounding your cursor.

And things have been no different with Wave 4 and now 5 — huge huge context upgrades…

With Wave 5 Windsurf now uses your clipboard as context for code completions.

Just look at this:

It’s incredible.

We literally copied pseudocode and once we went back to start typing in the editor — it saw the Python file and function we were creating — it started making suggestions to implement the function.

And it would intelligently do the exact same thing for any other language.

It’ll be really handy when copying answers from ChatGPT or StackOverflow that you can’t just drop into your codebase as-is.

Instead of having to paste and format/edit, Windsurf automatically integrates what you copied into your codebase.

But Wave 5 goes way beyond basic copy and paste.

With this new update Windsurf now uses your conversation history with Cascade chat as context for all your code completions.

Look at this.

Cascade didn’t even give us any code snippets — but the Tab feature still understood the chat message and realized what you’re most likely to do next in the codebase.

So you see, we’re getting even more ways to edit our code from chatting with Cascade — depending on the use case.

You can make Windsurf edit the code directly with Cascade Write mode — auto-pilot vibe coding.

You can chat with Cascade and get snippets of changes that can be made — which you can accept one at a time, based on what you think is best.

Or now, you can use Cascade to get guidelines on what you need to do, then write the code yourself for fine-grained control — using insanely sophisticated Tab completions along the way to move faster than ever.

I know some of you still don’t want to trust the auto-edits from vibe coding tools — this update is certainly for you.

Whichever level of control you want, Windsurf can handle it.

And it still doesn’t stop there with Wave 5 — your terminal isn’t getting left behind…

Context from the IDE terminal includes all your past commands:

These AI IDEs just keep getting better and better.

The gap between devs who use these tools and those acting like they’re irrelevant just keeps growing wider every day.

Now, even more updates to help you spend less time on grunt coding and develop with lightning speed — while still having precise control of everything when you choose to.

Now with Wave 5 — a smarter coding companion with even more context to better predict thoughts and save you from lifting much of a finger — apart from the one to press Tab.

He used AI to apply to 1,000+ jobs and got flooded with interviews

His AI bot made thousands of job applications automatically while he slept — only for him to wake up to an interview request in the morning.

Over the course of 1 month he got dozens of job interviews — over 50.

And of course he’s not the only one — we now have several services out there that can do this.

But we can build it ourselves and start getting interviews on autopilot.

Mhmm

Looking at this demo already confirms my expectation that the service would be best as a browser extension.

No not best — the only way it can work.

Lol of course no way LinkedIn’s gonna let you get all that juicy job data with a public API.

So scraping and web automation is the only way.

So now let’s set it up for LinkedIn like it is here.

Of course we can’t just go ahead and start the automation — we need some important input first.

Looking at the inputs and their data types:

Skills — a list of text values, so a string array/list

Job location — a string of course, even though there could be a geolocation feature to automate this away.

Number of jobs to apply to — too easy

Experience level and job type – string

If I’m building my own personal bot then I can just use hardcoded variables for all these instead of creating a full-blown UI.

So once you click the button it’s going to go straight to a LinkedIn URL from the Chrome extension page

Look at the URL format:

JavaScript
linkedin.com/jobs/search/?f_AL=true&keywords=product%20management%20&f_JT=F&start=0

So it’s using some basic string interpolation to search the LinkedIn Jobs page with one of the skills from the input.

And we can rapidly go through the list with the start query param that paginates the items.
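
Here’s a quick sketch of that interpolation (the query params are taken from the URL format above; the helper name is mine):

JavaScript
// Hypothetical helper: builds the search URL from the user's inputs.
// Params from the URL format above: f_AL = Easy Apply filter,
// f_JT = job type, start = pagination offset (25 jobs per page).
function buildSearchUrl(skill, page = 0) {
  const keywords = encodeURIComponent(skill);
  return `https://www.linkedin.com/jobs/search/?f_AL=true&keywords=${keywords}&f_JT=F&start=${page * 25}`;
}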

And now this is where we start the web scraping.

We’d need to start parsing the list and items to get useful information for our application goals.

Like looking for jobs with low applications to boost our chances.

You can get a selector for the list first.

JavaScript
const listEl = document.querySelector('.scaffold-layout__list');

Now we’d need selectors to uniquely identify a particular list item.

So each list item is going to be a <li> in the <ul> in the .scaffold-layout__list list.

And we can use the data-occludable-job-id attribute as the unique identifier.
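
For instance, a quick sketch of grabbing one item by that id (the attribute name is from the markup; the id value here is made up):

JavaScript
// Select a single job item by its unique id
const jobItem = document.querySelector(
  '[data-occludable-job-id="4012345678"]' // hypothetical id value
);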

Now we can process this li to get info on the job from the list item.

JavaScript
const listEl = document.querySelector(
  ".scaffold-layout__list ul"
);
const listItems = listEl.children;

for (const item of listItems) {
  // ...
}

Like to find jobs that have that “Be an early applicant” stuff:

JavaScript
for (const item of listItems) {
  const isEarlyApplicant = item.textContent.includes(
    "Be an early applicant"
  );
}

Also crucial to only find jobs with “Easy Apply” that let us apply directly on LinkedIn instead of a custom site, so we can have a consistent UI for automation.

JavaScript
for (const item of listItems) {
  // ...
  const isEasyApply = item.textContent.includes("Easy Apply");
}

We can keep querying like this for whatever specific thing we’re looking for.

And when it matches we click to go ahead with applying.

JavaScript
for (const item of listItems) {
  // ...
  if (isEarlyApplicant && isEasyApply) {
    item.click();
  }
}

Find a selector for the Easy Apply button to auto-click:

JavaScript
for (const item of listItems) {
  // ...
  if (isEarlyApplicant && isEasyApply) {
    item.click();
    const easyApplyButton = document.querySelector(
      "button[data-live-test-job-apply-button]"
    );
    // delay here with setTimeout or something
    easyApplyButton.click();
  }
}
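
That delay comment deserves a concrete shape. A minimal sketch, wrapping setTimeout in a Promise so we can await it:

JavaScript
// Tiny await-able delay helper built on setTimeout
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

// e.g. inside an async function:
// await sleep(1000);
// easyApplyButton.click();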

Do the same for the Next button here:

And then again.

Now in this late stage things get a bit more interesting.

How would we automate this?

The questions are not all from a pre-existing list — they could be anything.

And it looks like the service I’ve been showing demos of didn’t even do this step properly — it just put in default values — 0 years of experience for everything — like really?

Using an LLM would be a great idea — for each field I’ll extract the question and the expected answer format and give this to the LLM.

So that means I’ll also need to provide resume-ish data to the LLM.

I’ll use our normal UI inspection to get the title.

JavaScript
function processFreeformFields() {
  const modal = document.querySelector(
    ".jobs-easy-apply-modal"
  );
  const fieldset = modal.querySelector(
    "fieldset[data-test-form-builder-radio-button-form-component]"
  );
  const questionEl = fieldset.querySelector(
    "span[data-test-form-builder-radio-button-form-component__title] span"
  );
  const question = questionEl.textContent;
}
JavaScript
function processFreeformFields() {
  // ...
  const options = fieldset.querySelectorAll(
    "div[data-test-text-selectable-option]"
  );
  const labels = [];
  for (const option of options) {
    labels.push(option.textContent);
  }
}

Now we have all the data for the LLM:

JavaScript
async function processFreeformFields() {
  // ...
  const questionEl = fieldset.querySelector(
    "span[data-test-form-builder-radio-button-form-component__title] span"
  );
  const question = questionEl.textContent;

  const options = fieldset.querySelectorAll(
    "div[data-test-text-selectable-option]"
  );
  const labels = [];
  for (const option of options) {
    labels.push(option.textContent);
  }

  const { answers } = await sendQuestionsToLLM([
    { question, options: labels, type: "select" },
  ]);
  const answer = answers[0];
  const answerIndex = labels.indexOf(answer);
  const option = options[answerIndex];
  option?.click();
}

So we can do something like this for all the fields to populate the input object we’ll send to the LLM in the prompt.
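
The snippet above assumes a sendQuestionsToLLM helper. Here’s one possible sketch of it using OpenAI’s chat completions API (the model choice, prompt wording, and response shape are all my assumptions):

JavaScript
// Hypothetical implementation of the sendQuestionsToLLM helper.
const OPENAI_API_KEY = "..."; // placeholder: your API key
const RESUME = "..."; // placeholder: your resume-ish data as plain text

async function sendQuestionsToLLM(questions) {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: "gpt-4o-mini",
      response_format: { type: "json_object" },
      messages: [
        {
          role: "system",
          content:
            "Answer job application questions using this resume. " +
            'Respond with JSON: { "answers": [...] }, one answer per ' +
            "question, choosing from the given options when present.\n\n" +
            RESUME,
        },
        { role: "user", content: JSON.stringify(questions) },
      ],
    }),
  });
  const data = await res.json();
  return JSON.parse(data.choices[0].message.content); // { answers: [...] }
}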

And with this all that’s left is clicking the button to submit the application

I could also do some parsing to detect when this success dialog shows.

I could use a JavaScript API like MutationObserver to detect when this element’s visibility changes — like the display property changing from 'none' to 'block'.
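
A minimal sketch of that, assuming the dialog element already exists in the DOM (hidden) and using a hypothetical .success-dialog selector:

JavaScript
// Watch the dialog's style/class attributes and react when it becomes visible
const dialog = document.querySelector(".success-dialog"); // hypothetical selector
const observer = new MutationObserver(() => {
  if (getComputedStyle(dialog).display !== "none") {
    observer.disconnect();
    // Application submitted: move on to the next job item
  }
});
observer.observe(dialog, {
  attributes: true,
  attributeFilter: ["style", "class"],
});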

But with this I’d have successfully auto-applied for the job and I can move on to the next item in the jobs list.

JavaScript
const listEl = document.querySelector(
  ".scaffold-layout__list ul"
);
const listItems = listEl.children;

for (const item of listItems) {
  // ...
}

Or the next page of jobs

JavaScript
const page = 1; // start goes 0 -> 25 to move to the next page
const nextPage = `linkedin.com/jobs/search/?f_AL=true&keywords=${skills[0]}&f_JT=F&start=${page * 25}`;

This boomerang coding puzzle may mess with your head

I found this interesting boomerang puzzle that many people could get wrong the first time.

Especially if you ask them to do it in one line! — as we’re going to do later in this article.

JavaScript
const arr = [4, 9, 4, 6, 3, 8, 3, 7, 5, -5, 5];
console.log(countBoomerangs(arr)); // 3

It’s all about counting the “boomerangs” in a list. Can you do it?

Maybe you first want to do what the hell a boomerang is right?

Okay so it’s like, any section of the list with 3 numbers where the first and last numbers are the same (and the middle one is different): like [1, 2, 1]:

So how many boomerangs can you see in [4, 9, 4, 6, 3, 8, 3, 7, 5, -5, 5]?

It’s 3:

  1. [4, 9, 4]
  2. [3, 8, 3]
  3. [5, -5, 5]

So the puzzle is to write an algorithm to find this pattern throughout the list.

But here’s where it gets tricky: The algorithm should also count overlapping boomerangs — like in [1, 5, 1, 5, 1, 5, 1] we have FIVE boomerangs — not two — I even thought it was three at first — no it’s freaking five.

So how do we go about this?

My first instinct is to loop through the list, and then once we’re up to 3 items we can do the calculation.

It’s one of those situations where we need to keep track of previous values in the loop at every step of the loop.

So in every loop we’ll have one value for the current item, the previous item, and the one before that.

How do we keep track of all the items?

For the current item it’s super easy of course — it’s just the current iter variable:

JavaScript
countBoomerangs([1, 2, 1, 0, 3, 4, 3]);

function countBoomerangs(arr) {
  let curr;
  for (const item of arr) {
    curr = item;
    console.log(`curr: ${curr}`);
  }
}

What about keeping track of the previous variable?

We can’t use the current iter variable anymore — we need to store the iter variable from one iteration just before the next one starts.

So what we do — store the value of curr just before the loop ends:

JavaScript
countBoomerangs([1, 2, 1, 0, 3, 4, 3]);

function countBoomerangs(arr) {
  let curr;
  let prev;
  for (const item of arr) {
    curr = item;
    console.log(`curr: ${curr}, prev: ${prev}`);
    prev = curr;
  }
}

Or we can actually do it at the point just before the next loop starts:

JavaScript
countBoomerangs([1, 2, 1, 0, 3, 4, 3]);

function countBoomerangs(arr) {
  let curr;
  let prev;
  for (const item of arr) {
    prev = curr;
    curr = item;
    console.log(`curr: ${curr}, prev: ${prev}`);
  }
}

It has to be one of these two points — either right at the start of each iteration or right at the end.

Right at the start really means before we update curr to the current item.

Right at the end really means after we’ve finished using the stale prev variable — before we update it, making it stale again for the next iteration — you get it? “Stale” because it’s always going to be out of date; it’s supposed to be.

And what about the previous one before this previous one relative to the current variable? The previous previous variable?

We use the exact same logic — although there’s something you’re going to need to watch out for…

So to track the previous previous variable I need to set prev2 to the stale stale iter variable — if you know what I mean, ha ha…

So like before, we do it either at the beginning of the loop, like this:

JavaScript
countBoomerangs([1, 2, 1, 0, 3, 4, 3]);

function countBoomerangs(arr) {
  let curr;
  let prev;
  let prev2;
  for (const item of arr) {
    // Stale stale iter variable (stale of the stale)
    prev2 = prev;
    // Just stale (one stale)
    prev = curr;
    // Before update to current (non-stale)
    curr = item;
    console.log(`curr: ${curr}, prev: ${prev}, prev2: ${prev2}`);
  }
}

But what about at the end?

We have to maintain the order prev2 → prev → curr — we always need to update prev2 to prev’s stale value before updating prev to curr’s stale value.

JavaScript
function countBoomerangs(arr) {
  let curr;
  let prev;
  let prev2;
  for (const item of arr) {
    // Update to current (non-stale)
    curr = item;
    console.log(`curr: ${curr}, prev: ${prev}, prev2: ${prev2}`);
    // Stale stale iter variable (stale of the stale)
    prev2 = prev;
    // Just stale (one stale)
    prev = curr;
  }
}

So finally we’ve been able to track all these 3 variables to check for the boomerang.

Now all that’s left is to make the check in every loop and update a count — pretty straightforward stuff:

JavaScript
function countBoomerangs(arr) {
  // ...
  let count = 0;
  for (const item of arr) {
    prev2 = prev;
    prev = curr;
    curr = item;
    if (prev2 === curr && prev !== curr) {
      count++;
    }
  }
  return count;
}
JavaScript
console.log(countBoomerangs([1, 2, 1, 0, 3, 4, 3])); // 2
console.log(countBoomerangs([0, 0, 5, 8, 5, 2, 1, 2])); // 2

And it automatically works for overlapping boomerangs too:

JavaScript
console.log(countBoomerangs([1, 5, 1, 5, 1, 5, 1])); // 5

But isn’t there an easier way?

I noticed we could have just used a traditional for loop and used the iteration counter to get the sub-list of up to 3 numbers back:

Either this:

JavaScript
function countBoomerangs2(arr) {
  for (let i = 0; i < arr.length; i++) {
    const prev2 = arr[i - 2];
    const prev = arr[i - 1];
    const curr = arr[i];
    console.log(
      `curr: ${curr}, prev: ${prev}, prev2: ${prev2}`
    );
  }
}

or this:

JavaScript
function countBoomerangs2(arr) {
  for (let i = 0; i < arr.length; i++) {
    const [prev2, prev, curr] = arr.slice(i - 2, i + 1);
    console.log(`curr: ${curr}, prev: ${prev}, prev2: ${prev2}`);
  }
}

I’ll add a check for when the iteration counter has gone up to 3 numbers to avoid the undefined stuff. This would definitely throw a nasty out-of-range error in a stricter language like C#.

JavaScript
function countBoomerangs2(arr) {
  for (let i = 0; i < arr.length; i++) {
    // ✅ Check we're at least 3 items in
    if (i > 1) {
      const [prev2, prev, curr] = arr.slice(i - 2, i + 1);
      console.log(`curr: ${curr}, prev: ${prev}, prev2: ${prev2}`);
    }
  }
}

And the comparison will be just as before:

JavaScript
function countBoomerangs2(arr) {
  let count = 0;
  for (let i = 0; i < arr.length; i++) {
    if (i > 1) {
      const [prev2, prev, curr] = arr.slice(i - 2, i + 1);
      if (prev2 === curr && prev !== curr) {
        count++;
      }
    }
  }
  return count;
}

The best part about this alternative is it lets us set up a beautiful one-liner solution with functional programming constructs like JavaScript’s reduce().

JavaScript
function countBoomerangs2(arr) {
  return arr.reduce(
    (count, _, i, arr) =>
      arr[i - 2] === arr[i] && arr[i - 1] !== arr[i]
        ? count + 1
        : count,
    0
  );
}

Short and sweet.

Okay maybe not so short — but now just one single multi-line statement.

Puzzle solved.

This new AI IDE from Google is an absolute game changer

This new IDE from Google is seriously revolutionary.

Project IDX is in a completely different league from competitors like VS Code or Cursor or whatever.

It’s a modern IDE in the cloud — packed with AI features.

I was not surprised to see this sort of thing coming from Google — with their deep-seated hatred for local desktop apps.

Loading your projects from GitHub and then install dependencies instantly without any local downloading.

It’s a serious game changer if your local IDE is hoarding all your resources on your normie PC — like VS Code does a lot.

Tari for the last time, VS Code is a code editor and not an IDE!! Learn the difference for goodness sake!!!

Ah yes, a code editor that eats up several gigabytes of RAM and gobbles up so much of your battery life that your OS itself starts complaining bitterly.

Such a lightweight code editor.

Most certainly not an IDE.

Anyway, so I could really see the difference between VS Code and IDX on my old PC.

Like when it came to indexing files in a big project to enable language features like IntelliSense & variable rename.

VS Code would sometimes take forever to load and it might not even load fully — bug?

I would have to reload the window multiple times for the editor to finally get it right.

But with IDX it was amazing. The difference was clear.

Once I loaded the project I had all the features ready. Everything happened instantly.

Because all the processing was no longer happening on a weak everyday PC but in a massively powerful data center with unbelievable speeds.

Having a remote server take care of the heavy lifting drastically cuts down on the work your local PC has to handle.

Including project debugging tasks that take lots of resources — like Android Emulator tests.

The Android Studio Emulator couldn’t even run on my old PC without crashing miserably, so seeing the IDX emulator spring to life effortlessly with near-zero delay was pretty exciting.

Templates are another awesome convenience — just start up any project with all the boilerplate you need — you don’t even need to fire up the CLI.

And no, you’re not stuck with those templates either — you can just start from a blank template and customize it as much as you want, like you would in local editors.

Another huge part of IDX is AI of course — but lol don’t think they’ll let you choose between models like all those others.

It’s Gemini or nothing. Take it or leave it.

Not like it’s nearly as bad as some people say — or bad at all.

And look, it indexes your code to give you responses from all the files in the codebase — something that’s becoming a standard feature you’d expect across editors.

And it looks like it has some decent multi-step agentic AI editing features.

I was impressed — it tried creating the React app and failed because there were already files in the folder — then see what happened…

It intelligently knew that it should delete all the files in the project and then try again.

It automatically handled a failure case that I didn’t tell it about beforehand and got back on track.

And guess what?

After it tried deleting the files and it didn’t work — cause I guess .idx can’t be deleted — it then decided to create an empty subfolder to create the React project in.

I never said anything about what to do when the folder wasn’t empty — it just knew. It didn’t keep trying blindly with something that just wasn’t working.

Pretty impressive.

Okay, but it did fail pretty miserably when it came to creating the React file.

It put the CSS code in the JSX file where it was obviously not supposed to be.

So clearly whatever model they’re using for this still can’t compare to Claude. Cause Claude-powered Windsurf would never make this mistake.

But not like the CSS itself was bad.

But of course this will only continue to improve. And Claude will also get even better — as you might have seen from the recent Claude 3.7 release.

So even if you stick with your local IDE, IDX is still a solid, rapidly improving, AI-powered alternative for writing code faster than ever.