Tari Ibaba

Tari Ibaba is a software developer with years of experience building websites and apps. He has written extensively on a wide range of programming topics and has created dozens of apps and open-source libraries.

AI coding agents are not glorified StackOverflow

It’s a common pushback I hear from the usual AI deniers.

Oh Tari why are you hyping this thing up so much lol, it’s no different from copying and pasting from StackOverflow for goodness sake. Just calm down bro.

They refuse to accept reality.

Like how could you possibly equate those two?

It’s like saying hiring a chef to cook a meal is no different from searching for a recipe and doing the cooking yourself.

Just because the same knowledge is used in both cases, or what?

It makes no sense.

AI agents are not “improved Google” or “improved StackOverflow”.

Can StackOverflow build entire features from scratch spanning multiple files in your project?

Can StackOverflow do this?

Can Google fix errors in your code with something as simple as “what’s wrong with this code”?

Do they have any deep contextual access to your code to know what you could possibly mean with an instruction as vague as that?

All StackOverflow and Google can do is give you fragments of generic information — you have to do the reasoning yourself.

It’s up to you to tailor the information and integrate it into your project.

And then you still have to do testing and debugging as always.

AI agents are here to do all of this much faster and with far greater accuracy.

That’s like the whole point of AI — of automation.

Massive amounts of work done with a tiny fraction of the effort of the manual alternative.

Faster. Easier.

Predictability. Insane personalization. Smart recommendations.

These are the things that make AI so deadly. It doesn’t matter if they use the same knowledge sources that a human would use.

It’s what would make a hypothetical AI cooking machine a danger to chef careers.

It doesn’t matter if the coding or cooking information and answers are already out there on the Internet.

Even when it comes to accessing knowledge, chatbots are still obviously better at giving it to you in a straightforward cohesive manner, after researching and synthesizing the information from different sources.

Copying and pasting from StackOverflow and Google cannot give you any of these benefits.

AI is exposing the many flaws of the broken education system

AI tools are showing us just how broken the education system really is.

I recently stumbled upon this article talking about the AI double standard in education.

Teachers are using AI to grade school papers and prepare learning plans — yet they’re banning students from using it.

ChatGPT getting banned in schools and universities tells you everything you need to know about the education system.

School doesn’t prepare anyone for the real world.

If it did, then none of these tools that are clearly used in the real world would be banned.

All most schools do is feed you unnecessary or esoteric knowledge you won’t need in the vast majority of cases.

In university they box you into a cage of specialization to meet the requirements for a job.

And for most of the jobs, you didn’t even need to know half of the things they made you learn to get the certificate.

Assignments and exams for the most part only test how much you can recall information.

They test your memory retrieval ability. They don’t test your thinking process.

They don’t test how sharp your mental models for solving problems are.

So of course if they allowed AI tools like ChatGPT, everyone would get a perfect score in every exam.

Because all the knowledge is already out there.

Actually, AI is making it pretty clear that assignments and exams shouldn’t even really exist.

If school was really about learning, then the focus would be on personalized, hands-on, interactive practice.

It wouldn’t be about knowing the “right” answers and getting a terrible grade if you don’t.

Grades wouldn’t even be a thing, at least in their current form.

School wouldn’t be about getting the correct solution if you want to “pass”, it would be about becoming someone who can solve problems — especially big picture problems that really matter in life — or at least should really matter.

And about using powerful tools to help us solve those problems even more easily and quickly.

And what if AI could eventually solve the problem entirely without us even having to think about it?

Would it be so wrong to have AI “think” for us, especially for irrelevant problems we’d rather not handle ourselves?

Relatively boring, repetitive problems?

Problems that already have well-established, predictable methods of solving them, which today’s AIs could easily internalize and replicate.

Problems like coding, for the most part.

But sadly most schools mainly exist to take your money and produce certified drones.

They’re not about big-picture, original thinking or the pursuit of happiness.

Too idealistic, right?

Another amazing new IDE from Google — destroys VS Code

Wow this is incredible.

Google is getting dead serious about dev tooling — their new Firebase Studio is going to be absolutely insane for the future of software development.

A brand new IDE packed with incredible and free AI coding features to build full-stack apps faster than ever before.

Look at how it was intelligently prototyping my AI app with lightning speed — simply stunning.

AI is literally everywhere in Firebase Studio — right from the very start of even creating your project.

This is Project IDX on steroids.

  • Lightning-fast cloud-based IDE
  • Genius agentic AI
  • Dangerous Firebase integration and instant deployment…

It’s actually based on Project IDX but way better.

All my IDX projects are already there automatically — zero effort in migration.

And it looks like they’re going with a light theme this time — versus the dark theme in IDX.

Before even opening any project Gemini is there to instantly scaffold whatever you have in mind.

Firebase Studio uses Gemini 2.5 Flash — the thinking model that’s been seriously challenging Claude and Grok for the past few weeks.

For free.

And you can choose among their most recent models — but only Gemini (sorry).

Although it looks like there could be a workaround with the Custom model ID option.

For project creation there are still dozens of templates to choose from — including no template at all.

Everything runs on the cloud in Firebase Studio.

No more wasting time setting up anything locally — build, preview, and deploy right from Firebase Studio.

Open up a project and loading happens instantly.

Because all the processing is no longer happening on a weak everyday PC — it’s now happening in a massively powerful data center with unbelievable speeds.

You can instantly preview every change in a live environment — Android emulators load instantly.

You’ll automatically get a link for every preview to make it easy to test and share your work before publishing.

The dangerous Firebase integration will be one of the biggest selling points of Firebase Studio.

All the free, juicy, powerful Firebase services they’ve had for years — now here comes a home-grown IDE to tie them together in such a deadly way.

  • Authentication for managing users
  • Firestore for real-time databases
  • Cloud Storage for handling file uploads
  • Cloud Functions for server-side logic

All of these are available directly from the Studio interface.

And that’s why deployment is literally one click away once you’re happy with your app.

Built-in Firebase Hosting integration to push your apps live to production or preview environments effortlessly.
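
If you’re wondering what wiring those services up looks like in actual code, here’s a minimal sketch using the standard Firebase web SDK (the config values and the "tasks" collection are just placeholders for illustration):

```ts
// Minimal sketch: saving a document to Firestore with the Firebase web SDK.
// The config values and collection name are placeholders; use your own project's config.
import { initializeApp } from "firebase/app";
import { getFirestore, collection, addDoc } from "firebase/firestore";

const app = initializeApp({
  apiKey: "YOUR_API_KEY",
  projectId: "your-project-id",
});

const db = getFirestore(app);

// Save a task to a "tasks" collection and log its generated ID
export async function saveTask(title: string) {
  const ref = await addDoc(collection(db, "tasks"), {
    title,
    createdAt: Date.now(),
  });
  console.log("Saved task:", ref.id);
}
```

In Firebase Studio the whole point is that Gemini can scaffold this kind of boilerplate for you, but it’s still nice to know what’s sitting underneath.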

Who is Firebase Studio great for?

  • Solo developers who want to quickly build and launch products
  • Teams prototyping new ideas
  • Hackathon participants
  • Educators teaching fullstack development
  • Anyone who wants a low-friction, high-speed way to build real-world apps

It especially shines for developers who already love Firebase but want a more integrated coding and deployment flow.

You can start using Firebase Studio by visiting firebase.studio. You’ll need a Google account. Once inside, you can create new projects, connect to existing Firebase apps, and start coding immediately. No downloads, no complex setup.

So this is definitely something to consider — you might start seeing local coding as old-school.

But whether you’re building your next startup or just hacking together a side project, Firebase Studio is a fast, integrated way to bring your app to life.

OpenAI’s o3 and o4-mini models are bigger than you think

This is incredible.

o3 and o4-mini are massive leaps towards a versatile general purpose AI.

This is insane — the model intelligently knew exactly what the person wrote here — actually figured out it was upside down and rotated it.

They are taking things to a whole new level with complex multimodal reasoning.

This one is even more insane — it easily solved a complicated maze and accurately drew the path it took from start to finish.

With perfectly accurate code to draw the path.

Multimodal reasoning is a major step towards an AI that could understand and interact with the entire virtual or physical world in every possible way.

Imagine how much more powerful it would be when they start thinking with audio and video.

It’s a major step towards a general purpose AI that can work with any kind of data in any situation.

o3: Powerful multimodal reasoning model — deeper analysis, problem-solving, decision-making.

o4-mini: The smaller, more efficient sibling — still pretty impressive for its size.

The possibilities are endless:

  • Solve complex visual puzzles — like we saw for the maze
  • Navigate charts, graphs, and infographics
  • Perform spatial and logical reasoning grounded in visuals
  • Blend information from images and text to make better decisions
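
If you want to poke at this kind of multimodal reasoning yourself, here’s a minimal sketch of sending an image to o4-mini through the OpenAI API (the image URL and prompt are placeholders, and you’ll need your own API key in the OPENAI_API_KEY environment variable):

```ts
// Minimal sketch: asking o4-mini to reason about an image via the OpenAI API.
// Assumes OPENAI_API_KEY is set; the image URL is a placeholder.
import OpenAI from "openai";

const client = new OpenAI();

async function solveMaze() {
  const response = await client.chat.completions.create({
    model: "o4-mini",
    messages: [
      {
        role: "user",
        content: [
          { type: "text", text: "Solve this maze and describe the path from start to finish." },
          { type: "image_url", image_url: { url: "https://example.com/maze.png" } },
        ],
      },
    ],
  });

  console.log(response.choices[0].message.content);
}

solveMaze();
```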

Multimodal reasoning AI isn’t just gonna write code or help you decide where to go on your next holiday.

It’ll be able to work directly with:

  • Blueprints and maps
  • Body language
  • Scientific diagrams

This will be huge for AIs that interact with the physical world.

Imagine your personal AI assistant that could infer your desires without you even having to tell it anything.

Now we mostly talk to assistants in just text format…

But with multimodal AIs, they could use input from so much more than the words you’re actually saying:

  • The tone of your voice (audio)
  • Your facial expression (visual)
  • Your body language (visual)

And of course still using context from your previous messages and conversations.

It could understand you at such a deep level and give ultra-personalized suggestions for whatever you ask.

OpenAI’s new GPT 4.1 coding model is insane — even destroys 4.5

Wow this is incredible.

OpenAI’s new GPT 4.1 model blows almost every other model out of the water — including GPT 4.5 (terrible naming I know).

It’s not even close — just look at what GPT 4o and GPT 4.1 produced for the exact same prompt:

❌ Before: GPT 4o

Prompt:

Make a flashcard web application.
The user should be able to create flashcards, search through their existing flashcards, review flashcards, and see statistics on flashcards reviewed.
Preload ten cards containing a Hindi word or phrase and its English translation.
Review interface: In the review interface, clicking or pressing Space should flip the card with a smooth 3-D animation to reveal the translation. Pressing the arrow keys should navigate through cards.
Search interface: The search bar should dynamically provide a list of results as the user types in a query.
Statistics interface: The stats page should show a graph of the number of cards the user has reviewed, and the percentage they have gotten correct.
Create cards interface: The create cards page should allow the user to specify the front and back of a flashcard and add to the user’s collection. Each of these interfaces should be accessible in the sidebar. Generate a single page React app (put all styles inline).

✅ Now look at what GPT 4.1 produced for the same prompt:

The 4.1 version is just way better in every way:

  • ✅ Cleaner and more intuitive inputs
  • ✅ Better feedback for the user
  • ✅ Polished UI with icons and color

It’s a massive improvement — which is why IDEs like Windsurf and Cursor quickly added GPT 4.1 support just a few hours after its release.

Major GPT-4.1 enhancements

1 million

GPT 4.1 has a breakthrough 1 million token context window.

Way higher than the previous 128,000 token limit GPT 4o could handle.

So now the model can process and understand much larger inputs:

  • Extensive documents
  • Complex codebases — leading to even more powerful coding agents

GPT 4.1 will digest the content well enough to focus on the relevant information and disregard any distractions.

Just better in every way

GPT-4.1 has proven to be better than 4o and 4.5 in just about every benchmark.

How great at coding?

54.6% on the SWE-bench Verified benchmark

  • 21.4% absolute improvement over GPT-4o
  • 26.6% absolute improvement over GPT-4.5.

Instruction following

Scored 38.3% on Scale’s MultiChallenge benchmark

  • 10.5% absolute increase over GPT-4o

Long-context comprehension

Sets a new state-of-the-art with a 72.0% score on the Video-MME benchmark’s long, no subtitles category.

  • 6.7% absolute increase over GPT-4o

Cheaper too

Greater intelligence for a fraction of the cost. GPT-4.1 is also 26% more cost-effective than GPT-4o.

A significant decrease — which you’ll definitely feel in an AI app with many thousands of users bombarding the API every minute.

Not like most of us will ever get to such levels of scale, ha ha.

Meet Mini and Nano

OpenAI also released two streamlined versions of GPT-4.1:

GPT-4.1 Mini

Mini still gives GPT-4o a run for its money, with:

  • 50% lower latency
  • 83% lower cost

GPT-4.1 Nano

The smallest, fastest, and most affordable model.

Perfect for low-latency tasks like classification and autocompletion.

And despite being so small, it still achieves impressive scores and outperforms GPT-4o Mini:

  • 80.1% on MMLU
  • 50.3% on GPQA
  • 9.8% on Aider polyglot coding

Evolution doesn’t stop

GPT-4 was once the talk of the town — but today it’s on its way out.

With GPT-4.1, OpenAI plans to phase out older models:

  • GPT-4: Scheduled to be retired from ChatGPT by April 30, 2025.
  • GPT-4.5 Preview: Set to be deprecated in the API by July 14, 2025.

Yes, even GPT-4.5, which came out just a few weeks ago, is going away soon.

Right now GPT-4.1 is only available in the API for developers and enterprise users.
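
Since it’s API-only for now, here’s a minimal sketch of calling it with the official OpenAI Node SDK (the prompt is just an example; gpt-4.1-mini and gpt-4.1-nano can be swapped in the same way):

```ts
// Minimal sketch: calling GPT-4.1 through the OpenAI API.
// Assumes OPENAI_API_KEY is set in your environment.
import OpenAI from "openai";

const client = new OpenAI();

async function reviewCode(code: string) {
  const response = await client.chat.completions.create({
    model: "gpt-4.1", // or "gpt-4.1-mini" / "gpt-4.1-nano"
    messages: [
      { role: "system", content: "You are a senior code reviewer." },
      { role: "user", content: `What's wrong with this code?\n\n${code}` },
    ],
  });

  return response.choices[0].message.content;
}

reviewCode("function add(a, b) { return a - b; }").then(console.log);
```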

GPT-5 might be delayed but OpenAI isn’t slowing down.

GPT-4.1 is a big step up—smarter, faster, cheaper, and able to handle way more context. It sets a fresh standard and opens the door for what’s coming next.

VS Code’s new AI agent mode is an absolute game changer

Wow.

VS Code’s new agentic editing mode is simply amazing. I think I might have to make it my main IDE again.

It’s your tireless autonomous coding companion with total understanding of your entire codebase — performing complex multi-step tasks at your command.

Competitors like Windsurf and Cursor have been stealing the show for weeks now, but it looks like VS Code and Copilot are finally fighting back.

It’s similar to Cursor Composer and Windsurf Cascade, but with massive advantages.

This is the real deal

Forget autocomplete, this is the real deal.

With agent mode you tell VS Code what to do with simple English and immediately it gets to work:

  • Analyzes your codebase
  • Plans out what needs to be done
  • Creates and edits files, runs terminal commands…

Look what happened in this demo:

She just told Copilot:

“add the ability to reorder tasks by dragging”

Literally a single sentence and that was all it took. She didn’t need to create a single UI component or edit a single line of code.

This isn’t code completion, this is project completion. This is the coding partner you always wanted.

Pretty similar UI to Cascade and Composer btw.

It’s developing for you at a high level and freeing you from all the mundane + repetitive + low-level work.

No sneak changes

You’re still in control…

Agent mode drastically upgrades your development efficiency without taking away the power from you.

You stay in the loop at every step:

  • It checks with you before running non-default tools or terminal commands.
  • It shows you its edits before they land — you can review, accept, tweak, or undo them.
  • You can pause or stop its suggestions at any time.

You’re the driver. It’s just doing the heavy lifting.

Supports that MCP stuff too

Model Context Protocol standardizes how applications provide context to language models.

Agent Mode can interact with MCP servers to perform tasks like:

  • AI web debugging
  • Database interaction
  • Integration with design systems.
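
As a rough idea of what that looks like in practice, here’s a sketch of registering an MCP server in a .vscode/mcp.json file (check the VS Code docs for the exact schema; the filesystem server and path are just examples):

```json
{
  "servers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/your/project"]
    }
  }
}
```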

You can even enhance Agent Mode’s power by installing extensions that provide tools the agent can use.

You have all the flexibility to select which tools you want for any particular agent action flow.

You can try it now

Agent Mode is free for all VS Code / GitHub Copilot users.

Mhmm I wonder if this would have been the case if they never had the serious competition they do now?

So here’s how to turn it on:

  1. Open your VS Code settings and set "chat.agent.enabled" to true (you’ll need version 1.99+).
  2. Open the Chat view (Ctrl+Alt+I or ⌃⌘I on Mac).
  3. Switch the chat mode to “Agent.”
  4. Give it a prompt — something high-level like:
    “Create a blog homepage with a sticky header and dark mode toggle.”

Then just watch it get to work.
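
And if you’d rather edit settings.json directly, step 1 above is just this entry:

```json
{
  "chat.agent.enabled": true
}
```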

When should you use it?

Agent Mode shines when:

  • You’re doing multi-step tasks that would normally take a while.
  • You don’t want to micromanage every file or dependency.
  • You’re building from scratch or doing big codebase changes.

For small edits here and there I’d still go with inline suggestions.

Final thoughts

Agent Mode turns VS Code into more than just an editor. It’s a proper assistant — the kind that can actually build, fix, and think through problems with you.

You bring the vision and it brings it to life.

People are using ChatGPT to create these amazing new action figures

Wow this is incredible. People are using AI to generate insane action figures now.

Incredibly real and detailed with spot-on 3D depth and lighting.

All you need to do:

  • Go to ChatGPT 4o
  • Upload a close-up photo of a person

Like for this one:

We’d use a prompt like this:

“Create image. Create a toy of the person in the photo. Let it be an action figure. Next to the figure, there should be the toy’s equipment, each in its individual blisters.
1) a book called “Tecnoforma”.
2) a 3-headed dog with a tag that says “Troika” and a bone at its feet with word “austerity” written on it.
3) a three-headed Hydra with a tag called “Geringonça”.
4) a book titled “D. Sebastião”.
Don’t repeat the equipment under any circumstances. The card holding the blister should be a strong orange. Also, on top of the box, write ‘Pedro Passos Coelho’ and underneath it, ‘PSD action figure’. The figure and equipment must all be inside blisters. Visualize this in a realistic way”

You don’t even need to use a photo.

For example to generate the Albert Einstein action figure at the beginning:

Create an action figure toy of Albert Einstein. Next to the figure, there should be toy’s equipment, like things associated with the toy in pop culture.
On top of the box, write the name of the person, and underneath it, a popular nickname.
Don’t repeat the equipment under any circumstances.
The figure and equipment must all be inside blisters.
Visualize this in a realistic way.

Replace Albert Einstein with any famous name and see what pops out.

This could be a mini-revolution in the world of toy design and collectibles.

Designing unique and custom action figures from scratch — without ever learning complex design or 3D modelling tools.

You describe your dream action figure character in simple language and within seconds you get a high-quality and detailed image.

The possibilities are endless:

  • Superheroes from scratch with unique powers, costumes, and lore — oh, but you may have copyright issues there…
  • Fantasy warriors inspired by games like Dungeons & Dragons or Elden Ring
  • Sci-fi cyborgs and aliens designed for custom tabletop campaigns
  • Anime-style fighters for manga or animation projects
  • Personal avatars that reflect your identity or alter egos

You can tweak details in the prompts, like armor style, weapons, colors, body type, or aesthetic, to get the exact look you want.

You can even turn a group of people into action figures:

As AI gets better at turning 2D to 3D and 3D printing becomes more accessible, we’re heading toward a future where anyone can:

  • Generate a toy line overnight
  • Print and ship action figures on-demand
  • Remix characters based on culture, story, or emotion

Turning imagination into real-life products and having powerful impacts on the physical world.

7 amazing new features in the Windsurf Wave 6 IDE update

Finally! I’ve been waiting for this since forever.

Automatic Git commit messages are finally here in Windsurf — just one of the many amazing new AI features they’ve added in the new Wave 6 update that I’ll show you.

Incredible one-click deployment from your IDE, intelligent memory… these are huge jumps forward in Windsurf’s software dev capabilities.

1. One-click deployment with Netlify — super easy now

Okay this is a game changer.

They’ve teamed up with Netlify so you can literally deploy your front-end stuff with just one click right from Windsurf.

Literally no context switching. Everything stays in your editor.

Here we’re claiming the deployment to associate it with our Netlify account.

No more messing with all those settings and manual uploads.

With Wave 6 you just build your app, tell Windsurf to deploy, and it’s live instantly.

Just imagine the flow: you whip up a quick landing page with HTML, CSS, and JavaScript and instantly reveal it to the world without even leaving your editor.

Your productivity and QOL just went up with this for real.

2. Conversation table of contents

Quickly jump back to earlier suggestions or code versions.

Super handy for those longer coding chat sessions.

3. Better Tab

They’ve upgraded Windsurf Tab inline suggestions once again to work with even more context.

Now it remembers what you searched for to give you even smarter suggestions.

And it works with Jupyter notebooks now too:

4. Smarter memory

Windsurf already remembers past stuff but they’ve made it even better.

Now you can easily search, edit, and manage what it remembers — so you have more control over how the AI understands your project and what you like.

5. Automatic commit messages

Yes like I was saying — Windsurf can now write your Git commit messages for you based on the code changes.

One click, and it gives you a decent summary. Saves a bunch of time and helps keep your commit history clean.

6. Better MCP support

They’ve also improved how Windsurf works with MCP (Model Context Protocol) servers — especially useful if you’re working in a bigger company and connecting the AI to external tools and data.

7. New icons and much more

Just a little update among others — they’ve got some new Windsurf icons for you to personalize things up.

Also the ability to edit suggested terminal commands and much more.

This new one-click deployment could really shake things up for front-end development — it makes getting your work out there so much faster and lets you focus on the actual coding.

As AI keeps getting better, tools like Windsurf are going to become even more important in how we build software.

Wave 6 shows they’re serious about making things simpler and giving developers smart tools to be more productive and creative. This new update could be a real game changer for AI-assisted development.

OpenAI’s new GPT-4o image generation is an absolute game changer

GPT-4o’s new image generation is destroying industries in real-time.

It hasn’t even been a week and it’s been absolutely insane — even Sam Altman can’t understand what’s going on right now.

Things are definitely not looking too good for apps like Photoshop.

Look how amazing the layering is. Notice the before & after — it didn’t just copy and paste the girl image onto the room image, like Photoshop would do.

It’s no longer sending prompts to DALL-E behind-the-scenes — it understands the images at a deep level.

Notice how the 3D angle and lighting in the after image are slightly different — it knows it’s the same room. And the same thing for the girl image.

These are not just a bunch of pixels or a simple internal text representation to GPT-4o. It “understands” what it’s seeing.

So of course refining images is going to be so much more accurate and precise now.

The prompt adherence and creativity is insane.

What are the odds that something even remotely close to this was in the training data?

It’s not just spitting out something it’s seen before — not that it ever really was, like some claimed. How well it understands your prompt has improved drastically.

And yes, it can now draw a full glass of wine.

Another huge huge upgrade is how insanely good it is at understanding & generating text now.

This edit right here is incredible on so many levels…

1. Understanding the images well enough to recreate them so accurately in a completely different image style with facial expressions.

2. It understands the entire context of the comic conversation well enough to create matching body language.
Notice how the 4th girl now has her left hand pointing — which matches the fact that she’s ordering something from the bar boy.
A gesture that arguably matches the situation even better than in the previous image.
And I bet it would be able to replicate her original hand placement if the prompt explicitly asked it to.

3. And then the text generation — this is something AI image generators have been struggling with since forever — and now see how easily GPT-4o recreated the text in the bubbles.

And not only that — notice how the last girl’s speech bubble now has an exclamation point — to perfectly match her facial expression and this particular situation.

And yes it can integrate text directly into images too — perfect for posters & social media graphics.

If this isn’t a total disruptor in the realm of graphics design and photoshopping and everything to do with image creation, then you better tell me what is.

It’s really exciting, and that’s why we’ve been seeing so many images of this type flooding social media — images in the style of the animation studio, Studio Ghibli.

And it’s also part of why they’ve had to limit it to paid ChatGPT users for now, with support for the free tier coming soon.

They’ve got to scale the technology and make sure everyone has a smooth experience.

All in all, GPT-4o image gen is a major step forward that looks set to deal a major blow to traditional image editing & graphic design tools like Photoshop & Illustrator.

Greater things are coming.

Amazon’s new AI coding tool is insane

Amazon’s new Q Developer could seriously change the way developers write code.

It’s a generative AI–powered assistant designed to take a lot of the busywork out of building software.

A formidable agentic rival to GitHub Copilot & Windsurf, but with a special AWS flavor baked in — because you know, Amazon…

It doesn’t matter whether you’re writing new features or working through legacy code.

Q Developer is built to help you move faster—and smarter with the power of AWS.

I see they’re really pushing this AWS integration angle — possibly to differentiate themselves from the already established alternatives like Cursor.

Real-time code suggestions as you type — simply expected at this point, right?

It can generate anything from a quick line to an entire function — all based on your comments and existing code. And it supports over 25 languages—so whether you’re in Python, Java, or JavaScript, you’re covered.

Q Developer has autonomous agents just like Windsurf — to handle full-blown tasks like implementing a feature, writing documentation, or even bootstrapping a whole project.

It actually analyzes your codebase, comes up with a plan, and starts executing it across multiple files.

It’s not just autocomplete. It’s “get-this-done-for-me” level AI.

I know some of the Java devs among you are still using Java 8, but Q Developer can help you upgrade to Java 17 automatically.

You basically point it at your legacy mess—and it starts cleaning autonomously.

It even supports transforming Windows-based .NET apps into their Linux equivalents.

And it works with popular IDEs like VS Code — and probably Cursor & Windsurf too — though I wonder if it would interfere with their built-in AI features.

  • VS Code, IntelliJ, Visual Studio – Get code suggestions, inline chats, and security checks right inside your IDE.
  • Command Line – Type natural language commands in your terminal, and the CLI agent will read/write files, call APIs, run bash commands, and generate code.
  • AWS Console – Q is also built into the AWS Console, including the mobile app, so you can manage services or troubleshoot errors with just a few words.

Q Developer helps you figure out your AWS setup with plain English. Wondering why a network isn’t connecting? Need to choose the right EC2 instance? Q can guide you through that, spot issues, and suggest fixes—all without digging through endless docs.

Worried about privacy? Q Developer Pro keeps your data private and doesn’t use your code to train models for others. It also works within your AWS IAM roles to personalize results while keeping access secure.

On top of that it helps you write unit tests + optimize performance + catch security vulnerabilities—with suggestions for fixing them right away.

Amazon Q Developer isn’t just another code assistant. It’s a full-blown AI teammate.

It’s definitely worth checking out — especially if you’re deep in the AWS ecosystem.