
This new IDE just destroyed VS Code and Copilot without even trying

Wow, I never thought the day I’d stop using VS Code would come so soon…

But the new Windsurf IDE blows VS Code out of the water — now I’ve cancelled my GitHub Copilot subscription and made it my main IDE.

And you know, when they said it was an “Agentic IDE” I was rolling my eyes at first because of all the previous hype with agents like AutoGPT.

But Windsurf shocked me.

These agentic tools are going to transform coding for sure — and not necessarily for the better — at least from the POV of software devs 😅

The agent actually makes useful changes to my code, and my prompts don’t have to be so low-level. They call it Cascade and it saves so much time.

You see how it analyzes several areas in my code to make the changes — and this analysis can include many, many files in your codebase.

And same for the changes — it’s doing the coding for you now.

Save tons of time by telling it to write your commit messages for you:

Just like Copilot it gives you code completions — that’s just expected at this point — and they’re free.

But it goes way beyond that with Supercomplete — an incredible feature that predicts not just your next line, but your next intent.

Probably inspired by a similar feature in the Cursor IDE.

It doesn’t just complete your code where the cursor is, it completes your high-level action. It “thinks” at a higher level of abstraction.

Like when you rename a variable it’ll automatically know you want to do the same for all the other references.

And not just for one variable but multiple logically related ones — which goes beyond the “rename variable” function that editors like VS Code have.

When you update a schema in your code, it’ll automatically update all the places that use it — and not just in the same file.

And how about binding to event handlers in a framework like React? It does that for you right after you create the variable.
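
To picture what that looks like, here’s a hypothetical React sketch of my own (not Windsurf’s actual output): you write the handler, and a Supercomplete-style prediction wires it up to the JSX for you.

```tsx
// Hypothetical example of a Supercomplete-style edit — my own sketch,
// not actual Windsurf output.
import React, { useState } from "react";

export function SignupForm() {
  const [email, setEmail] = useState("");

  // 1. You write this handler yourself...
  function handleSubmit(e: React.FormEvent<HTMLFormElement>) {
    e.preventDefault();
    console.log("submitting", email);
  }

  // 2. ...and the predicted completion binds it (and the input) for you:
  return (
    <form onSubmit={handleSubmit}>
      <input value={email} onChange={(e) => setEmail(e.target.value)} />
      <button type="submit">Sign up</button>
    </form>
  );
}
```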

You see how AI continues to handle more and more coding tasks with increasing levels of complexity and abstraction.

We got low-level code completions…

Then we got higher-level action completions with Cursor and this Windsurf Supercomplete stuff.

Now we’re starting to have full-blown agents to handle much more advanced programming tasks.

And these agents will only continue to get better.

How long until they completely take over the entire coding process?

And then the entire software development life cycle?

You know, some people say the hardest part of software development is getting total clarity from the user on what they want.

They say coding is easy, but that other parts of the software dev process, like this initial requirements stage, are hard and AI won’t be able to do them.

But this is mostly cope.

Telling an AI exactly what you want is not all that different from telling a human what you want. It’s mostly a matter of avoiding ambiguity with context, or asking for more specificity with more detailed prompts.

AI agents are rapidly improving and will be able to autonomously resolve this lack of clarity with multi-step prompting.

Now we’re seeing what tools like Windsurf and Cursor Composer can do.

So, how to get started with Windsurf?

Windsurf comes from the same people who made the free Codeium extension for VS Code, so you’ll get it at codeium.com.

And there’s a pretty good free version, but it won’t give you all the unique features that really make this IDE stand out.

You only get a limited free trial of the agentic Cascade feature — you’ll have to upgrade to at least Pro to really make the most of it.

Before, the Pro price was $10 per month for unlimited Cascade use, but developers used it so much that they had to set limits, raise the price, and introduce a pay-as-you-go system.

Eventually, coding as we know it is going to change forever. And software developers will simply have to adapt.

The perfect AI workout app idea I’ve been thinking about

This isn’t all of it, but it’s a nice start for sure…

There’s a humongous amount I see AI doing for fitness in the coming years, I’m telling you.

Including powerful SaaS ideas to build and finally become the billionaire you were always supposed to be (and fail as always — gosh I hate SaaS).

Like when ChatGPT first broke out, one of the easiest pieces of low-hanging fruit was exercise generation.

First of all, most fitness apps are just way too convoluted for their own good.

Asking you all sorts of stupid questions in the onboarding instead of just getting straight to the point.

And in the end, the same old stupid pre-built manual workouts you were always gonna get. So annoying.

These clowns have no clue about user experience whatsoever.

No one should have to go through that trash anytime they install a new app.

The perfect fitness app would learn from TikTok.

Do you see how painfully simple TikTok is?

The second you open the TikTok app after installing you immediately get blasted with content.

They don’t even waste a nanosecond (or picosecond) to show the logo – I mean who gives a shit about your logo right?

When you open the perfect workout app you will instantly get blasted with a decent AI generated workout for the average person.

With all the timing, reps, rest periods, demonstrations and all — that’s already standard stuff so whatever.

You can start the workout immediately!

You can regenerate — maybe even swipe up to view more AI workout types and ideas like in TikTok.

With this I already have a full fledged workout app ready to go.

Tari but what about…

Shut your smelly mouth, this obviously won’t be everything — but I’m telling you this alone is more than enough.

People will know that when they tap your app they instantly get value.

The time delay between opening your bullshit and getting rewarded is nil.

Rewarded with something new.

One tap.

Why the hell do you think TikTok is so addictive?

Obviously there’s plenty of other reasons but this is a big one.

Action-reward delay — also a critical component of habit formation, you know.

Then there’d be like a text input to create a workout according to my pompous ultra-precise specifications. Or super vague, doesn’t matter. That’s the flexibility you get from LLMs and AI.

Start a 4-day workout split where I train like a Spartan warlord but also don’t sweat too much. I want abs, obviously, but I also don’t want to give up croissants. Prioritize aesthetics over strength. Be dramatic. Tons of jumping or whatever.
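
Under the hood, a request like that could be a single LLM call. Here’s a minimal sketch, assuming the standard OpenAI chat completions API (the model choice and the WorkoutPlan shape are placeholders I picked, not a real app’s spec):

```ts
// Minimal sketch of turning a free-form request into a structured workout.
// Assumes the standard OpenAI chat completions API; the model name and
// WorkoutPlan shape below are my own placeholder choices.
type WorkoutPlan = {
  name: string;
  days: {
    day: string;
    exercises: { name: string; sets: number; reps: number; restSec: number }[];
  }[];
};

async function generateWorkout(userRequest: string): Promise<WorkoutPlan> {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: "gpt-4.1",
      // Ask for JSON so the app can render the plan directly.
      response_format: { type: "json_object" },
      messages: [
        {
          role: "system",
          content:
            "You generate workout plans as JSON matching the WorkoutPlan shape: " +
            "{ name, days: [{ day, exercises: [{ name, sets, reps, restSec }] }] }.",
        },
        { role: "user", content: userRequest },
      ],
    }),
  });
  const data = await res.json();
  return JSON.parse(data.choices[0].message.content) as WorkoutPlan;
}
```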

The app would use your workout completions and performance to tune the recommendations for what types of workout to suggest in the TikTok-like feed.

There’s a whole lot more to be done, but it would be a radical improvement over the trash we’re still contending with today.

It would be growth. Improvement. What I love to see.

Quality of life improved.

Great AI innovation from Meta and Google

I love to see this.

A great innovation coming soon from Meta and Zuckerberg.

They’re getting ready to revamp and upgrade their entire advertising system with AI, from start to finish.

❌ Before:

If I want to create a Facebook or Google ad right now, there’s quite a bit of manual work to do.

I’ll have to create the assets to use for the ad – photos and videos.

Have to decide who’s best to show the ad to.

Have to write my own copy — and maybe I’ll be searching for marketing and persuasion and copywriting tips and tricks to make sure I get it right, like I’ve done in the past.

I’ll have to create multiple ad versions, constantly tweaking them in the hope that at least one is good enough to convince people.

✅ After:

The AI does everything.

All I have to do is give the AI a high-level overview of what I’m trying to achieve, and it takes over everything from there.

Automatically generates photos and videos for the ad.

Generates titles, descriptions, and every other copy I need.

Figures out, on its own, the best demographic to show my product to.

And then it will intelligently adjust the copy text and the images and photos used for the assets.

Imagine how revolutionary this would be — completely taking the guesswork out of promoting your products.

Look at this 👇. The AI would guarantee 100% ad strength, automatically following all the guidelines here.

The last few times I ran ads I was never perfectly confident about details like how the copy should read. But an innovation like this would take away all such uncertainty.

This will be a massive quality of life upgrade.

I’ve already seen Google Ads start incorporating things like this into the ad creation flow:

But it’s still not comprehensive enough, you know. This is just for one step.

What Meta and Zuckerberg are planning will revolutionize every single step of the ad creation process.

It would be like an autonomous AI agent that does everything.

No more need to hire any so-called marketing agencies or copywriters — ooh so I guess this would be another major industry wipeout from AI.

And I think this could also work for use cases like creating app store screenshots — as part of handling the entire app promotion process from A to Z.

I think transparency could be a real issue here though. The whole thing would be a black box: the AI does everything by itself, so you can’t be sure about the integrity of the whole process.

Yeah I’ve been seeing some people express concerns about this.

But I think if you don’t trust the process you can always use the current manual method and compare the hard conversion or sales results you got.

And you know, we already have something of a black box right now, in terms of who gets shown the ad.

We’ve always had to have a certain level of trust for the whole ad process. A trust earned from the results seen by millions of brands all over the world.

Overall it should be an immense improvement to cost efficiency and quality of life.

AI coding agents are not glorified StackOverflow

It’s a common pushback I hear from the usual AI deniers.

Oh Tari why are you hyping this thing up so much lol, it’s no different from copying and pasting from StackOverflow for goodness sake. Just calm down bro.

They refuse to accept reality.

Like how could you possibly equate those two?

It’s like saying hiring a chef to cook a meal is no different from searching for a recipe and doing the cooking yourself.

Just because the same knowledge is used in both cases, or what?

It makes no sense.

AI agents are not “improved Google” or “improved StackOverflow”.

Can StackOverflow build entire features from scratch spanning multiple files in your project?

Can StackOverflow do this?

Can Google fix errors in your code with something as simple as “what’s wrong with this code”?

Do they have any deep contextual access to your code to know what you could possibly mean with an instruction as vague as that?

All StackOverflow and Google can do is give you fragments of generic information — you have to do the reasoning yourself.

It’s up to you to tailor the information and integrate it into your project.

And then you still have to do testing and debugging as always.

AI agents are here to do all of this much faster and with far greater accuracy.

That’s like the whole point of AI — of automation.

Massive amounts of work done with a tiny fraction of the effort of the manual alternative.

Faster. Easier.

Predictability. Insane personalization. Smart recommendations.

These are the things that make AI so deadly. It doesn’t matter if they use the same knowledge sources that a human would use.

It’s what would make a hypothetical AI cooking machine a danger to chef careers.

It doesn’t matter if the coding or cooking information and answers are already out there on the Internet.

Even when it comes to accessing knowledge, chatbots are still obviously better at giving it to you in a straightforward cohesive manner, after researching and synthesizing the information from different sources.

Copying and pasting from StackOverflow and Google cannot give you any of these benefits.

AI is exposing the many flaws of the broken education system

AI tools are showing us just how broken the education system really is.

I recently stumbled upon this article talking about the AI double standard in education.

Teachers are using AI to grade school papers and prepare learning plans — yet they’re banning students from using it.

ChatGPT getting banned in schools and universities tells you everything you need to know about the education system.

School doesn’t prepare anyone for the real world.

If it did, then none of these tools clearly used in the real world would be banned.

All most schools do is feed you unnecessary or esoteric knowledge you won’t need in the vast majority of cases.

In university they box you into a cage of specialization to meet the requirements for a job.

And for most of the jobs, you didn’t even need to know half of the things they made you learn to get the certificate.

Assignments and exams for the most part only test how much you can recall information.

They test your memory retrieval ability. They don’t test your thinking process.

They don’t test how sharp your mental models for solving problems are.

So of course if they allowed AI tools like ChatGPT, everyone would get a perfect score in every exam.

Because all the knowledge is already out there.

Actually, AI is making it pretty clear that assignments and exams shouldn’t even really exist.

If school was really about learning, then the focus would be on personalized, hands-on, interactive practice.

It wouldn’t be about knowing the “right” answers and getting a terrible grade if you don’t.

Grades wouldn’t even be a thing, at least in their current form.

School wouldn’t be about getting the correct solution if you want to “pass”, it would be about becoming someone who can solve problems — especially big picture problems that really matter in life — or at least should really matter.

And using powerful tools to help us solve the problems even easier and faster.

And what if AI could eventually solve the problem entirely without us even having to think about it?

Would it be so wrong to have AI “think” for us, especially for irrelevant problems we’d rather not handle ourselves?

Relatively boring, repetitive problems?

Problems that already have well-established and predictable solution methods that today’s AIs could easily internalize and replicate.

Problems like coding, for the most part.

But sadly most schools mainly exist to take your money and produce certified drones.

They’re not about big-picture, original thinking or the pursuit of happiness.

Too idealistic, right?

Another amazing new IDE from Google — destroys VS Code

Wow this is incredible.

Google is getting dead serious about dev tooling — their new Firebase Studio is going to be absolutely insane for the future of software development.

A brand new IDE packed with incredible and free AI coding features to build full-stack apps faster than ever before.

Look at how it was intelligently prototyping my AI app with lightning speed — simply stunning.

AI is literally everywhere in Firebase Studio — right from the very start of even creating your project.

This is Project IDX on steroids.

  • Lightning-fast cloud-based IDE
  • Genius agentic AI
  • Dangerous Firebase integration and instant deployment…

It’s actually based on Project IDX but way better.

All my IDX projects are already there automatically — zero effort in migration.

And it looks like they’re going with a light theme this time — versus the dark IDX.

Before even opening any project Gemini is there to instantly scaffold whatever you have in mind.

Firebase Studio uses Gemini 2.5 Flash — the thinking model that’s been seriously challenging Claude and Grok for a few weeks now.

For free.

And you can choose among their most recent models — but only Gemini (sorry).

Although it looks like there could be a workaround with the Custom model ID option.

For project creation there are still dozens of templates to choose from — including no template at all.

Everything runs on the cloud in Firebase Studio.

No more wasting time setting up anything locally — build, preview, and deploy right from the Studio.

Open up a project and loading happens instantly.

Because all the processing is no longer happening on a weak everyday PC — it’s in a massively powerful data center with unbelievable speeds.

You can instantly preview every change in a live environment — Android emulators load instantly.

You’ll automatically get a link for every preview to make it easy to test and share your work before publishing.

The dangerous Firebase integration will be one of the biggest selling points of Firebase Studio.

All the free, juicy, powerful Firebase services they’ve had for years — now here comes a home-grown IDE to tie them together in such a deadly way.

  • Authentication for managing users
  • Firestore for real-time databases
  • Cloud Storage for handling file uploads
  • Cloud Functions for server-side logic

All of these are available directly from the Studio interface.
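
For a feel of what that wiring looks like in code, here’s a minimal sketch using the standard Firebase JS SDK (v9+ modular API); the config values are placeholders that Studio would normally fill in for you:

```ts
// Minimal sketch with the standard Firebase JS SDK (v9+ modular API).
// Config values are placeholders — Firebase Studio connects these for you.
import { initializeApp } from "firebase/app";
import { getAuth, signInAnonymously } from "firebase/auth";
import { getFirestore, collection, addDoc } from "firebase/firestore";

const app = initializeApp({
  apiKey: "YOUR_API_KEY",
  projectId: "your-project-id",
});

// Authentication for managing users
const auth = getAuth(app);
await signInAnonymously(auth);

// Firestore for real-time data
const db = getFirestore(app);
await addDoc(collection(db, "notes"), {
  text: "Hello from Firebase Studio",
  createdAt: Date.now(),
});
```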

And that’s why deployment is literally one click away once you’re happy with your app.

Built-in Firebase Hosting integration to push your apps live to production or preview environments effortlessly.

Who is Firebase Studio great for?

  • Solo developers who want to quickly build and launch products
  • Teams prototyping new ideas
  • Hackathon participants
  • Educators teaching fullstack development
  • Anyone who wants a low-friction, high-speed way to build real-world apps

It especially shines for developers who already love Firebase but want a more integrated coding and deployment flow.

You can start using Firebase Studio by visiting firebase.studio. You’ll need a Google account. Once inside, you can create new projects, connect to existing Firebase apps, and start coding immediately. No downloads, no complex setup.

So this is definitely something to consider — you might start seeing local coding as old-school.

But whether you’re building your next startup or just hacking together a side project, Firebase Studio is a fast, integrated way to bring your app to life.

OpenAI’s o3 and o4-mini models are bigger than you think

This is incredible.

o3 and o4-mini are massive leaps towards a versatile general purpose AI.

This is insane — the model intelligently knew exactly what the person wrote here — actually figured out it was upside down and rotated it.

They are taking things to a whole new level with complex multimodal reasoning.

This one is even more insane — it easily solved a complicated maze and accurately drew the path it took from start to finish.

With perfectly accurate code to draw the path.

Multimodal reasoning is a major step towards an AI that could understand and interact with the entire virtual or physical world in every possible way.

Imagine how much more powerful it would be when they start thinking with audio and video.

It’s a major step towards a general purpose AI that can work with any kind of data in any situation.

o3: Powerful multimodal reasoning model — deeper analysis, problem-solving, decision-making.

o4-mini: Smaller sibling of o4 — efficient but still pretty impressive.

The possibilities are endless:

  • Solve complex visual puzzles — like we saw for the maze
  • Navigate charts, graphs, and infographics
  • Perform spatial and logical reasoning grounded in visuals
  • Blend information from images and text to make better decisions
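
If you want to experiment with this yourself, image input already works through the regular chat completions API. A minimal sketch (the image URL here is a placeholder):

```ts
// Minimal sketch of sending an image to a multimodal reasoning model.
// Uses the standard chat completions image-input format; the URL is a placeholder.
const res = await fetch("https://api.openai.com/v1/chat/completions", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
  },
  body: JSON.stringify({
    model: "o4-mini",
    messages: [
      {
        role: "user",
        content: [
          { type: "text", text: "Solve this maze and describe the path from start to finish." },
          { type: "image_url", image_url: { url: "https://example.com/maze.png" } },
        ],
      },
    ],
  }),
});
const data = await res.json();
console.log(data.choices[0].message.content);
```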

Multimodal reasoning AI isn’t just gonna write code or help you decide where to go on your next holiday.

It’ll be able to work directly with:

  • Blueprints and maps
  • Body language
  • Scientific diagrams

This will be huge for AIs that interact with the physical world.

Imagine your personal AI assistant that could infer your desires without you even having to tell it anything.

Now we mostly talk to assistants in just text format…

But with multimodal AIs, they could use input from so many things other than the words you’re actually saying:

  • The tone of your voice (audio)
  • Your facial expression (visual)
  • Your body language (visual)

And of course still using context from your previous messages and conversations.

It could understand you at such a deep level and give ultra-personalized suggestions for whatever you ask.

OpenAI’s new GPT 4.1 coding model is insane — even destroys 4.5

Wow this is incredible.

OpenAI’s new GPT 4.1 model blows almost every other model out of the water — including GPT 4.5 (terrible naming I know).

It’s not even close — just look at what GPT 4o and GPT 4.1 produced for the exact same prompt:

❌ Before: GPT 4o

Prompt:

Make a flashcard web application.
The user should be able to create flashcards, search through their existing flashcards, review flashcards, and see statistics on flashcards reviewed.
Preload ten cards containing a Hindi word or phrase and its English translation.
Review interface: In the review interface, clicking or pressing Space should flip the card with a smooth 3-D animation to reveal the translation. Pressing the arrow keys should navigate through cards.
Search interface: The search bar should dynamically provide a list of results as the user types in a query.
Statistics interface: The stats page should show a graph of the number of cards the user has reviewed, and the percentage they have gotten correct.
Create cards interface: The create cards page should allow the user to specify the front and back of a flashcard and add to the user’s collection. Each of these interfaces should be accessible in the sidebar. Generate a single page React app (put all styles inline).

✅ Now look at what GPT 4.1 produced for the same prompt:

The 4.1 version is just way better in every way:

  • ✅ Cleaner and more intuitive inputs
  • ✅ Better feedback for the user
  • ✅ Polished UI with icons and color

It’s a massive improvement — which is why IDEs like Windsurf and Cursor quickly added GPT 4.1 support just a few hours after its release.

Major GPT-4.1 enhancements

1 million

GPT 4.1 has a breakthrough 1 million token context window.

Way higher than the previous 128,000 token limit GPT 4o could handle.

So now the model can process and understand much larger inputs:

  • Extensive documents
  • Complex codebases — leading to even more powerful coding agents

GPT 4.1 will digest the content well enough to focus on the relevant information and disregard any distractions.

Just better in every way

GPT-4.1 has proven to be better than 4o and 4.5 in just about every benchmark

How great at coding?

54.6% on SWE-bench Verified Benchmark

  • 21.4% absolute improvement over GPT-4o
  • 26.6% absolute improvement over GPT-4.5.

Instruction following

Scored 38.3% on Scale’s MultiChallenge benchmark

  • 10.5% absolute increase over GPT-4o

Long-context comprehension

Sets a new state of the art with a 72.0% score on the Video-MME benchmark’s long, no-subtitles category.

  • 6.7% absolute increase over GPT-4o

Cheaper too

Greater intelligence for a fraction of the cost. GPT-4.1 is also 26% more cost-effective than GPT-4o.

A significant decrease — which you’ll definitely feel in an AI app with many thousands of users bombarding the API every minute.

Not like most of us will ever get to such levels of scale, ha ha.

Meet Mini and Nano

OpenAI also released two streamlined versions of GPT-4.1:

GPT-4.1 Mini

Mini still gives GPT-4o a run for its money, with:

  • 50% less latency
  • 83% cheaper

GPT-4.1 Nano

The smallest, fastest, and most affordable model.

Perfect for low-latency tasks like classification and autocompletion.

And despite being so small, it still achieves impressive scores and outperforms GPT-4o Mini:

  • 80.1% on MMLU
  • 50.3% on GPQA
  • 9.8% on Aider polyglot coding

Evolution doesn’t stop

GPT-4 was once the talk of the town — but today it’s on its way out.

With GPT-4.1 out, OpenAI plans to phase out older models:

  • GPT-4: Scheduled to be retired from ChatGPT by April 30, 2025.
  • GPT-4.5 Preview: Set to be deprecated in the API by July 14, 2025.

Yes, even GPT-4.5, which just came out a few weeks ago, is going away soon.

Right now GPT-4.1 is only available in the API for developers and enterprise users.

GPT-5 might be delayed but OpenAI isn’t slowing down.

GPT-4.1 is a big step up—smarter, faster, cheaper, and able to handle way more context. It sets a fresh standard and opens the door for what’s coming next.

VS Code’s new AI agent mode is an absolute game changer

Wow.

VS Code’s new agentic editing mode is simply amazing. I think I might have to make it my main IDE again.

It’s your tireless autonomous coding companion with total understanding of your entire codebase — performing complex multi-step tasks at your command.

Competitors like Windsurf and Cursor have been stealing the show for weeks now, but it looks like VS Code and Copilot are finally fighting back.

It’s similar to Cursor Composer and Windsurf Cascade, but with massive advantages.

This is the real deal

Forget autocomplete, this is the real deal.

With agent mode you tell VS Code what to do with simple English and immediately it gets to work:

  • Analyzes your codebase
  • Plans out what needs to be done
  • Creates and edits files, runs terminal commands…

Look what happened in this demo:

She just told Copilot:

“add the ability to reorder tasks by dragging”

Literally a single sentence, and that was all it took. She didn’t need to create a single UI component or edit a single line of code.

This isn’t code completion, this is project completion. This is the coding partner you always wanted.

Pretty similar UI to Cascade and Composer btw.

It’s developing for you at a high level and freeing you from all the mundane + repetitive + low-level work.

No sneak changes

You’re still in control…

Agent mode drastically upgrades your development efficiency without taking away the power from you.

For every action it takes it’ll check with you when:

  • It wants to run non-default tools or terminal commands.
  • It’s about to make edits — you can review, accept, tweak, or undo them.
  • And you can pause or stop its suggestions anytime.

You’re the driver. It’s just doing the heavy lifting.

Supports that MCP stuff too

Model Context Protocol standardizes how applications provide context to language models.

Agent Mode can interact with MCP servers to perform tasks like:

  • AI web debugging
  • Database interaction
  • Integration with design systems.
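
As a rough sketch of the wiring (the server name and npm package below are hypothetical, and the exact schema can vary by VS Code version), MCP servers get registered in a workspace .vscode/mcp.json:

```json
{
  "servers": {
    "database-tools": {
      "type": "stdio",
      "command": "npx",
      "args": ["-y", "@example/mcp-database-server"]
    }
  }
}
```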

You can even enhance Agent Mode’s power by installing extensions that provide tools the agent can use.

You have all the flexibility to select which tools you want for any particular agent action flow.

You can try it now

Agent Mode is free for all VS Code / GitHub Copilot users.

Mhmm I wonder if this would have been the case if they never had the serious competition they do now?

So here’s how to turn it on:

  1. Open your VS Code settings and set "chat.agent.enabled" to true (you’ll need version 1.99+; see the snippet after this list).
  2. Open the Chat view (Ctrl+Alt+I or ⌃⌘I on Mac).
  3. Switch the chat mode to “Agent.”
  4. Give it a prompt — something high-level like:
    “Create a blog homepage with a sticky header and dark mode toggle.”
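
For step 1, if you’d rather edit settings.json directly, it’s a single entry:

```json
{
  "chat.agent.enabled": true
}
```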

Then just watch it get to work.

When should you use it?

Agent Mode shines when:

  • You’re doing multi-step tasks that would normally take a while.
  • You don’t want to micromanage every file or dependency.
  • You’re building from scratch or doing big codebase changes.

For small edits here and there I’d still go with inline suggestions.

Final thoughts

Agent Mode turns VS Code into more than just an editor. It’s a proper assistant — the kind that can actually build, fix, and think through problems with you.

You bring the vision and it brings it to life.

People are using ChatGPT to create these amazing new action figures

Wow this is incredible. People are using AI to generate insane action figures now.

Incredibly real and detailed, with spot-on 3D depth and lighting.

All you need to do:

  • Go to ChatGPT 4o
  • Upload a close-up image of a person

Like for this one:

We’d use a prompt like this:

“Create image. Create a toy of the person in the photo. Let it be an action figure. Next to the figure, there should be the toy’s equipment, each in its individual blisters.
1) a book called “Tecnoforma”.
2) a 3-headed dog with a tag that says “Troika” and a bone at its feet with word “austerity” written on it.
3) a three-headed Hydra with a tag called “Geringonça”.
4) a book titled “D. Sebastião”.
Don’t repeat the equipment under any circumstances. The card holding the blister should be a strong orange. Also, on top of the box, write ‘Pedro Passos Coelho’ and underneath it, ‘PSD action figure’. The figure and equipment must all be inside blisters. Visualize this in a realistic way”

You don’t even need to use a photo.

For example to generate the Albert Einstein action figure at the beginning:

Create an action figure toy of Albert Einstein. Next to the figure, there should be the toy’s equipment, like things associated with the toy in pop culture.
On top of the box, write the name of the person, and underneath it, a popular nickname.
Don’t repeat the equipment under any circumstances.
The figure and equipment must all be inside blisters.
Visualize this in a realistic way.

Replace Albert Einstein with any famous name and see what pops out.

This could be a mini-revolution in the world of toy design and collectibles.

Designing unique and custom action figures from scratch — without ever learning complex design or 3D modelling tools.

You describe your dream action figure character in simple language and within seconds you get a high-quality and detailed image.

The possibilities are endless:

Superheroes from scratch with unique powers, costumes, and lore — oh but you may have copyright issues there…

Fantasy warriors inspired by games like Dungeons & Dragons or Elden Ring

Sci-fi cyborgs and aliens designed for custom tabletop campaigns

Anime-style fighters for manga or animation projects

Personal avatars that reflect your identity or alter egos

You can tweak details in the prompts, like armor style, weapons, colors, body type, or aesthetic, to get the exact look you want.

You can even turn a group of people into action figures:

As AI gets better at turning 2D to 3D and 3D printing becomes more accessible, we’re heading toward a future where anyone can:

  • Generate a toy line overnight
  • Print and ship action figures on-demand
  • Remix characters based on culture, story, or emotion

Turning imagination into real-life products and having powerful impacts on the physical world.