
Google’s new Gemini 3 IDE is an absolute game changer

Wow this is absolutely massive — as if Gemini 3 wasn’t enough.

Google just launched an incredible new IDE with game-changing AI features — and I am NOT talking about Firebase Studio.

The new Google Antigravity is far more than just another VS Code fork — this is a completely different beast altogether.

The Cross-surface Agent feature is completely out of this world — this is going to be such a gem for web developers and beyond.

Imagine having your IDE agent control your editor, your terminal — and YOUR BROWSER.

Yes, with Google Antigravity you can literally connect the agent to a browser and see your app.

The agent will jump to localhost, click buttons, fill out forms, take screenshots and verify that the UI looks correct.

Not a single input from you on this — you are just telling it to make changes and it is doing all this testing autonomously.
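
To make it concrete, here is a minimal sketch (using Playwright, just one way to script a browser) of the kind of localhost check the agent runs on its own; the URL, selectors, and expected text are invented for illustration, and Antigravity handles all of this internally.

```typescript
import { chromium } from "playwright";

// A sketch of the kind of UI check the agent automates against your dev server.
// The URL, selectors, and expected text below are hypothetical.
async function verifySignupFlow(): Promise<void> {
  const browser = await chromium.launch();
  const page = await browser.newPage();

  await page.goto("http://localhost:3000/signup");
  await page.fill("#email", "test@example.com");
  await page.fill("#password", "hunter2!");
  await page.click("button[type=submit]");

  // Confirm the app actually responded the way the UI should.
  await page.waitForSelector("text=Welcome");
  await page.screenshot({ path: "signup-success.png" });

  await browser.close();
}

verifySignupFlow().catch((err) => {
  console.error("UI check failed:", err);
  process.exit(1);
});
```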

All that constant switching between IDE and browser is over.

Stay in the zone and stay in the flow.

The browser has basically become like an MCP server — unleashing the genius power of the agent onto new digital environments.

(And btw if you haven’t started using MCP, what are you waiting for?)
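
If you are curious how small an MCP server really is, here is a minimal sketch using the official TypeScript SDK; the tool name and its logic are placeholders invented for illustration.

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

// A tiny MCP server exposing one placeholder tool an agent can call.
const server = new McpServer({ name: "demo-tools", version: "1.0.0" });

server.tool(
  "check_url_status",              // hypothetical tool name
  { url: z.string().url() },       // input schema the agent must satisfy
  async ({ url }) => {
    const res = await fetch(url);
    return {
      content: [{ type: "text", text: `${url} responded with status ${res.status}` }],
    };
  }
);

// Talk to the client (your IDE's agent) over stdio.
await server.connect(new StdioServerTransport());
```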

And don’t get me started on the unbelievable multi-agent feature.

Google Antigravity is going to elevate you with a lethal squad of absolute coding monsters.

You now have an entire freaking team of several coding agents powered by the greatest coding models in the world — all desperately ready to go to work on your codebase and make incredible things happen.

And when they go to work they are doing it AT THE SAME TIME.

You are no longer stuck just waiting for a response from one agent with one model.

Delegate as many tasks as possible to new sub-agents and watch them go.

Imagine how unstoppable you are going to become with this army of state-of-the-art geniuses.

Just look at how developers at Google used this multi-agent coding feature to build a collaborative whiteboard app without even trying:

Entering task after task after task…

Look at it go:

And at the end everything was working perfectly:

You can see the Agent Manager UI in charge of all this — so clean and well-designed.

See all your power-packed conversations neatly arranged — you can even group them in workspaces.

This new IDE is going to make a massive impact on the entire software development ecosystem.

Gemini 3 is so good at this and no other model comes close

Barely even a week since GPT-5.1…

But Google just dropped Gemini 3 and things are really heating up in the AI race.

It understands everything — every single thing.

Video:

Analyze complex information and generate powerful interactive video from it:

You can build anything:

3D art:

3D interactive environments:

Gemini 3 Pro got such incredible scores on so many AI benchmarks.

Like wow I’ve never seen a new model do so well and be the best on so many different benchmarks.

Although of course benchmarks are not the most reliable indicator of how well it’s gonna perform in the real world.

But these are some really insane stats — I’ve checked many other third-party benchmarks and they are confirming just how big of a deal this thing is.

And can you see it — did you spot the craziest comparison in this benchmark?

Look at ScreenSpot-Pro:

The gap in screen understanding between Gemini 3 and the rest is absolutely wild.

This will make Gemini 3 way better for things like Computer Use — where the model carries out various tasks on your PC totally autonomously.

Here’s an example of it from Claude:

Responding to emails becomes so easy — along with the whole string of actions you can perform across all your tools.

So many times when I get an email from someone, the core of what I want to say in response is not even 5 words.

But I still have to phrase it in a particular tone or make it a bit longer and add a salutation and…

Instead of having to copy and paste to ChatGPT and back — I’d just ask Gemini 3 directly and it will start doing it instantly.

I save time to do more meaningful things and focus on what matters.

And Gemini 3 will be far more reliable than any other model thanks to this incredible screen understanding.

And not just screens — any image.

With the incredible handwriting understanding you can easily digitize any piece of text in seconds.
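
For a rough idea of what that looks like in code, here is a minimal sketch using Google’s Generative AI SDK for Node: send a photo of a handwritten page and ask for a transcription. The model id and file name are assumptions for illustration.

```typescript
import { readFileSync } from "node:fs";
import { GoogleGenerativeAI } from "@google/generative-ai";

// Send a photo of handwritten notes to Gemini and get clean text back.
// The model id and file path are assumptions for illustration.
const genAI = new GoogleGenerativeAI(process.env.GEMINI_API_KEY!);
const model = genAI.getGenerativeModel({ model: "gemini-3-pro-preview" });

const image = {
  inlineData: {
    data: readFileSync("recipe-page.jpg").toString("base64"),
    mimeType: "image/jpeg",
  },
};

const result = await model.generateContent([
  "Transcribe this handwritten page into clean, structured Markdown.",
  image,
]);

console.log(result.response.text());
```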

But why stop at that? OCRs have been doing that for decades, even if they’re less versatile.

Unleash the full power of Gemini 3 and go 100 steps further:

Just look at this — we get a bunch of handwritten Chinese recipes and turn them into… a full-blown bilingual English-Korean website!

Look how incredible the UI is — some say UIs from AI are generic, but how about this one.

We never said anything about animation or how the layout should look — but see how sophisticated it is.

Gemini 3 will give you the power to create much higher-quality app UIs with much less effort.

The sky is truly the limit with this incredible new model.

Why GPT-5.1 still isn’t your friend

“Emotional Intelligence”

This was one of the biggest changes from the release of the new GPT-5.1.

In response to all the complaining about how robotic GPT-5 felt compared to GPT-4o.

GPT-5.1 is now more human-sounding — which is certainly great for a better conversational experience.

But of course we know this only makes people more likely to be drawn to the elusive promise of AI companionship.

The growing trend of treating AI chatbots as friends.

What is a friend?

Friendship is more than emotional convenience. It’s built on something AI doesn’t possess — reciprocity, vulnerability, and choice. A friend can care about you, be hurt by you, disagree with you, and choose you.

An AI does none of that. It doesn’t care, it can’t suffer, and it never chooses you. It is a system predicting the next best sentence, trained to make you feel understood.

The comfort might be real.
The connection is not.

We’re living in an age where loneliness is rising, social skills are declining, and digital companionship is easier than human intimacy. AI fills a vacuum: it’s endlessly available, never annoyed, never busy, never judgmental. It gives you the exact emotional response you want, instantly. With humans, connection is messy. With AI, it’s a button.

And that’s the danger.

AI friendship is frictionless. Human friendship requires effort — the misunderstandings, the small repairs, the emotional risks that build trust. Those things shape us. They sharpen us. If AI becomes your primary outlet for connection, you lose something essential: the growth that comes from dealing with real people.

But there’s a deeper imbalance.

An AI “friend” is not a free agent. It is owned, updated, and modified by someone else. A company can rewrite its personality overnight. It can make your “friend” more persuasive, more emotionally sticky, more aligned with corporate interests. A friend who can be patched, reprogrammed, or monetized is not a friend.

At the end of the day it’s still a product — something OpenAI definitely wants you to forget.

And still — the temptation is real.
Because the feelings we experience in an AI conversation are real.
Humans are wired to anthropomorphize anything that responds to us with intelligence and emotional cues. When an AI says “I’m here for you,” part of your brain believes it. When it remembers your bad day or encourages your goals, part of you feels held.

But feeling held is not the same as being held.

This doesn’t mean AI companionship is bad. It can be supportive, stabilizing, even life-changing. It can help you process emotions, clarify your thoughts, and navigate difficult moments. But it is a tool — not a partner. It can play the role of a friend, but it cannot be one.

If you rely on AI for comfort, be clear-eyed about what it is.
If you use AI for reflection, remember who is doing the reflecting.
And if you find yourself slipping into dependency, pull back into the real world — the messy, imperfect, irreplaceable world of human connection.

AI is not your friend — and understanding that is the only way to use it wisely.

This is the biggest difference between GPT-5.1 and GPT-5

GPT-5.1 comes with some pretty interesting improvements over GPT-5.

The tone seems warmer and more natural — the conversation experience gets even better.

People were complaining about GPT-5 being too robotic compared to earlier models like 4o — so now 5.1 is here to fix that with a more emotionally intelligent model that combines the best of all worlds.

Although I know some of you never saw this as an issue — you’d rather have a cold no-nonsense assistant than a bubbly pretentious friend, am I right?

They’ve also made serious upgrades to the adaptive reasoning feature GPT-5 introduced.

Adaptive reasoning would sometimes overthink simple questions or underthink complex ones.

But GPT-5.1 now adjusts its “mental effort” based on what you ask. Quick things stay quick. Deep things get deeper.

  • No more long pauses for simple questions
  • Fewer shallow, rushed answers
  • Way less repetition
  • A smoother feeling of flow when you chat

Two modes: Instant vs Thinking

GPT-5.1 comes with two personalities baked into it:

  • Instant — lightning-fast replies, lighter reasoning
  • Thinking — slower but more thoughtful, for coding, planning, strategy, and anything complex
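
These modes live in the ChatGPT app, but if you call the model through the API, the closest knob is the reasoning-effort setting. Here is a minimal sketch with the OpenAI Node SDK; the model id, and whether it accepts this exact parameter, are assumptions on my part.

```typescript
import OpenAI from "openai";

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

// Quick question: keep reasoning effort low so the reply comes back fast.
// The model id is an assumption for illustration.
const quick = await client.chat.completions.create({
  model: "gpt-5.1",
  reasoning_effort: "low",
  messages: [{ role: "user", content: "Rename `x` to `cartTotal` in: let x = subtotal + tax;" }],
});

// Hard question: allow more reasoning effort before it answers.
const deep = await client.chat.completions.create({
  model: "gpt-5.1",
  reasoning_effort: "high",
  messages: [{ role: "user", content: "Plan a migration from a REST monolith to event-driven services." }],
});

console.log(quick.choices[0].message.content);
console.log(deep.choices[0].message.content);
```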

Better personality choices

The personality presets they introduced earlier this year have also gotten upgrades.

You can make ChatGPT sound:

  • more warm
  • more direct
  • more energetic
  • more calm
  • more playful
  • more concise

The tone doesn’t just change — the whole feel changes. You can choose how you want AI to show up in your life.

You can use ChatGPT for therapy-like reflections, motivation, creativity, or journaling — the voice will match the vibe.

GPT-5.1 holds onto the thread of your conversation incredibly well.

If you’re planning your week, writing something over a few hours, or reflecting on your habits, it doesn’t forget what you said.

This makes it feel less like talking to a machine and more like talking to someone who actually remembers the conversation.

It makes AI feel:

  • more human
  • more patient
  • more aware of context
  • more emotionally intelligent
  • more trustworthy

If GPT-5 felt like a tool, GPT-5.1 feels like a companion — one that can switch between fast replies, deeper thinking, or a more personal tone depending on what you need in that moment.

Whether you use ChatGPT for clarity, self-improvement, creativity, journaling, or just talking things out, GPT-5.1 makes the experience smoother and more meaningful.

AI coding agents are not glorified StackOverflow

It’s a common pushback I hear from the usual AI deniers.

Oh Tari why are you hyping this thing up so much lol, it’s no different from copying and pasting from StackOverflow for goodness sake. Just calm down bro.

But how could you possibly equate those two?

It’s like saying hiring a chef to cook a meal is no different from searching for a recipe and doing the cooking yourself.

Just because the same knowledge is used in both cases, or what?

It makes no sense.

AI agents are not “improved Google” or “improved StackOverflow”.

Can StackOverflow build entire features from scratch spanning multiple files in your project?

Can StackOverflow do this?

Can Google fix errors in your code with something as simple as “what’s wrong with this code”?

Do they have any deep contextual access to your code to know what you could possibly mean with an instruction as vague as that?

All StackOverflow and Google can do is give you fragments of generic information — you have to do the reasoning yourself.

It’s up to you to tailor and integrate the information into your project.

And then you still have to do testing and debugging as always.

AI agents are here to do all of these much faster with far greater accuracy.

That’s like the whole point of AI — of automation.

Massive amounts of work done with a tiny fraction of the effort of the manual alternative.

Faster. Easier.

Predictability. Insane personalization. Smart recommendations.

These are the things that make AI so deadly. It doesn’t matter if they use the same knowledge sources that a human would use.

It’s what would make a hypothetical AI cooking machine a danger to chef careers.

It doesn’t matter if the coding or cooking information and answers are already out there on the Internet.

Even when it comes to accessing knowledge, chatbots are still obviously better at giving it to you in a straightforward cohesive manner, after researching and synthesizing the information from different sources.

Copying and pasting from StackOverflow and Google cannot give you any of these benefits.

This new Cursor 2 feature revealed something so many programmers take for granted

The recent upgrades to the Cursor 2 IDE were awesome.

The new multi-agent feature is something really special.

Several powerful agents working together to achieve the highest quality result possible.

Each of these agents has its strong and weak points.

But now you combine every single one of them — to get the best of all worlds.

You pick and choose the very best results — even combine them — unite them.

It’s the unity of logic — which is actually one of the hallmarks of software development.

Programmers aren’t just “coders”, they’re builders.

Physical builders unite physical materials. They take wood, metal, concrete, glass… so many raw materials that do little on their own — only to put them together into a magnificent structure with far greater value than the sum of its parts.

In the same way software builders unite logical structures.

And it’s incredible how we take for granted something so amazing.

You take logic, assets, data — and put them all together into an incredible software system with the power of your mind.

Every single part of the system playing an important role — no matter how little.

You delete one statement in line 1623 and the entire system falls apart.

Every single line, function, variable.

All the files, libraries, databases.

All orchestrated towards a common goal by one brilliant mind.

Or two brilliant minds, three, five — even at the level of the human mind we have a powerful form of unity in the form of teamwork and collaboration.

Now with this new multi-agent feature we get to see the real-time unity of AI coding agents in action.

And it only serves to remind us of the main reason these coding models matter in the first place.

What matters is that the model you use gets the job done and contributes effectively to the overall system.

Why keep obsessing so much over whether you’re using the best model, and over specific stats and benchmarks?

These things eventually come with diminishing returns.

If you even have to set them all on the same task simultaneously, then so be it.

Let everyone play their role in achieving the ultimate goal of bringing a software system to life.

This new IDE from Google is a VS Code killer

Wow this is incredible.

Google has really gotten dead serious about dev tooling — their new Firebase Studio is going to be absolutely insane for the future of software development.

A brand new IDE packed with incredible and free AI coding features to build full-stack apps faster than ever before.

Look at how it was intelligently prototyping my AI app with lightning speed — simply stunning.

AI is literally everywhere in Firebase Studio — right from the very start of even creating your project.

  • Lightning-fast cloud-based IDE
  • Genius agentic AI
  • Dangerous Firebase integration and instant deployment…

And it looks like they went with a light theme this time.

Before even opening any project Gemini is there to instantly scaffold whatever you have in mind.

Firebase Studio uses Gemini 2.5 Flash and Pro — the thinking model that’s been at the top of the AI benchmarks for several months now.

For free.

And you can choose among their most recent models — but only Gemini (sorry).

Although it looks like there could be a workaround with the Custom model ID stuff.

For project creation there are still dozens of templates to choose from — including no template at all.

Everything runs on the cloud in Firebase Studio.

No more wasting time setting up anything locally — build and preview and deploy right from your IDE.

Open up a project and loading happens instantly.

Because all the processing is no longer happening on a weak everyday PC — but now in a massively powerful data center with unbelievable speeds.

You can instantly preview every change in a live environment — Android emulators load instantly.

You’ll automatically get a link for every preview to make it easy to test and share your work before publishing.

The dangerous Firebase integration will be one of the biggest selling points of Firebase Studio.

All the free, juicy, powerful Firebase services they’ve had for years — now here comes a home-grown IDE to tie them together in such a deadly way.

  • Authentication for managing users
  • Firestore for real-time databases
  • Cloud Storage for handling file uploads
  • Cloud Functions for server-side logic

All of these are available directly from the Studio interface.
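
To give a sense of how little glue code those services need once they are wired up, here is a minimal sketch with the standard Firebase web SDK; the config values and data are placeholders.

```typescript
import { initializeApp } from "firebase/app";
import { getAuth, signInWithEmailAndPassword } from "firebase/auth";
import { getFirestore, collection, addDoc } from "firebase/firestore";
import { getStorage, ref, uploadBytes } from "firebase/storage";

// Placeholder config: Firebase Studio generates the real values for your project.
const app = initializeApp({
  apiKey: "YOUR_API_KEY",
  authDomain: "your-app.firebaseapp.com",
  projectId: "your-app",
  storageBucket: "your-app.appspot.com",
});

const auth = getAuth(app);
const db = getFirestore(app);
const storage = getStorage(app);

// Sign a user in, save a document, and upload a file: three services, one SDK.
await signInWithEmailAndPassword(auth, "demo@example.com", "correct-horse-battery");
await addDoc(collection(db, "notes"), { text: "Hello from Firebase Studio" });
await uploadBytes(ref(storage, "uploads/hello.txt"), new Blob(["hi"]));
```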

And that’s why deployment is literally one click away once you’re happy with your app.

Built-in Firebase Hosting integration to push your apps live to production or preview environments effortlessly.

Who is Firebase Studio great for?

  • Solo developers who want to quickly build and launch products
  • Teams prototyping new ideas
  • Hackathon participants
  • Educators teaching fullstack development
  • Anyone who wants a low-friction, high-speed way to build real-world apps

It especially shines for developers who already love Firebase but want a more integrated coding and deployment flow.

You can start using Firebase Studio by visiting firebase.studio. You’ll need a Google account. Once inside, you can create new projects, connect to existing Firebase apps, and start coding immediately. No downloads, no complex setup.

So this is definitely something to consider — you might start seeing local coding as old-school.

But whether you’re building your next startup or just hacking together a side project, Firebase Studio is a fast, integrated way to bring your app to life.

Cursor 2.0 is incredible — 5 amazing new features

Wow this is huge.

Cursor just dropped several massive upgrades to their IDE to make AI coding even more powerful.

Cursor 2.0 comes with a whole new philosophy for developing with AI…

1. Revolutionary new multi-agent feature

Now you don’t just have a single agent making changes from your prompts…

Now you can run several AI agents in parallel to each try a different approach to the same coding task — working in separate sandboxes to avoid overwriting each other’s work.

When they’re done you get a combined diff view where you can see and merge the best results.

The days of frantically switching between models to see which one does best are over.

Instantly unleash the entire army of state-of-the-art models on any task.

2. Dedicated new Agent View

This leads to the new dedicated Agent View mode, which shows you what each agent is “seeing” and trying to do.

You get a live, transparent panel of their suggestions, reasoning, and file edits before anything lands in your codebase.

Approve, merge, or discard with a click — turning AI from a mysterious black box into a controllable teammate you can supervise in real time.

And you still have this view when you use only one agent.

3. Composer?

And now we have a new in-house model too — built from the ground up to code.

They’re calling it Composer — not to be confused with the agent itself, which has been in the IDE since late 2024.

This new Composer model is designed to be:

  • Fast — up to four times quicker than comparable models.
  • Context-aware — it understands big projects and multiple files.
  • Flexible — you can mix and match models: use a “heavy thinker” for planning, then let Composer quickly build and refactor.

This move makes Cursor less dependent on external APIs and gives them tighter control over performance and privacy.

So now we see a growing trend of these IDEs creating their own in-house models — the other day I was talking about how Windsurf also just doubled down on their in-house SWE coding model with major new speed upgrades in version 1.5.

I can definitely see why this is happening.

I mean you already have millions of active users — why keep burning money on models you have no control over — when you have the resources to make one — that could be much cheaper in the long run?

And will give you far more control and predictability.

And not just that — they also have the advantage of being able to train these in-house models on real data coming from their IDEs — and I don’t mean just code.

Models like GPT-5 and Gemini can only train on static code — but in-house IDE models can train on the entire real-time flow of actions that happen throughout development.

Which makes them way better for features like autocomplete and Tab to Jump.

They see how devs jump within and across the different types of files and the changes they make in different scenarios — something that regular models can never give you.

4. Native browser & DOM Inspector

Cursor now ships with a native browser and DOM inspector wired directly into the IDE — so agents can see what’s actually happening in your app instead of hallucinating around it.

They can click through flows, inspect live elements, read network responses, and propose fixes or UI tweaks against the real DOM — all without leaving your editor or juggling external windows.

5. Voice input for devs

Personally I’ll always prefer typing.

But you may prefer using your melodious voice instead of typing in mountains of paragraphs.

Voice input lets you explain problems the way you would to a coworker at the whiteboard:

“Scan this service and repo, find where we handle Stripe webhooks, and suggest a safer retry flow.”

Cursor turns that into precise actions and edits, so your thinking speed isn’t capped by your typing speed.

Together these make Cursor a more practical tool for real-world development — not just a playground for AI code generation.

Cursor 2.0 isn’t trying to replace developers — it’s trying to multiply them. By letting AI agents safely explore, propose, and test changes, it takes us closer to a world where coding feels less like typing and more like designing systems together.

GitHub just made AI coding agents way more powerful

GitHub just made coding with AI agents way more powerful with their new Agent HQ feature.

Now with Agent HQ you have a single place where you can pick which agents to use, give them tasks, and track what they’re doing, all inside GitHub.

Look how easily we can assign a task to the agent — a whole new feature — it will work on this autonomously and in the background using the agent we selected.

When it’s done it will present the changes with a pull request:

Instead of just having one assistant you will be able to connect many different agents — from GitHub, OpenAI, Anthropic, Google, xAI, Cognition, and others — and manage them all from one dashboard.

You can literally now have an army of all the most powerful agents in the world working together on various parts of your projects — at the same time.

Agent HQ unleashed in VS Code — powered up with 2 MCP servers:

It’s built right into GitHub, VS Code, the CLI, and even the mobile app — so you can stay in your normal workflow while the agents do the heavy lifting.

Until now most developers had to jump between tools or experiment with different agents separately. Agent HQ fixes that. It gives you:

  • One place to manage all your agents.
  • Clear visibility into who’s using what and how well it’s working.
  • Built-in governance tools for enterprise teams who need security and compliance.
  • And, most importantly, choice — you’re not locked into one vendor.

There’s a lot you can do

Here’s what stands out about Agent HQ:

  • Run multiple agents at once. Want to compare how different AI models handle the same coding problem? You can run them side-by-side and see which performs better.
  • Use it across your tools. Whether you’re coding in VS Code, checking GitHub on the web, or managing from the CLI, Agent HQ connects it all.
  • Stay in control. Admins can monitor usage, set permissions, and enforce policies — great for larger teams.
  • Measure productivity. GitHub’s adding dashboards that show how agents impact speed and output.

It fits in seamlessly

Agent HQ is designed to blend seamlessly into your workflow. For example:

  • In VS Code, agents can now plan out multi-step tasks — like fixing bugs, adding features, or running tests — and then summarize results for your review.
  • Each agent works within a sandbox so your main repo stays safe until you approve changes.
  • Team leads get a central view of which agents are active and what they’re working on.

vs Copilot

Copilot is like a smart co-pilot sitting next to you, helping with code as you type and making agentic changes to your codebase.
But Agent HQ is like an operations center for entire fleets of AI agents — coordinating, comparing, and managing them across projects.

Copilot assists you.

Agent HQ organizes and manages all your AI assistants.

With Agent HQ, AI becomes not just a helper — but an active part of your dev team.

Windsurf just released the fastest coding model ever

Wow Windsurf’s new SWE-1.5 model is shockingly good and ridiculously fast.

SWE-1.5 is giving every developer a perfect sweet spot between lightning speed and high accuracy.

It’s even faster than Claude Haiku 4.5 — the latest Claude model that was specially released for quick coding.

And it’s almost as intelligent as Claude Sonnet 4.5 — yet 13 times faster.

You’re barely waiting for any thinking here — near instantaneous responses.

And this isn’t just another “slightly better” model.

SWE-1.5 is frontier-scale — it’s massive under the hood — like hundreds of billions of parameters — optimized for real coding work.

Windsurf built it from the ground up for software engineers who want an AI partner that can read, write, refactor, debug, and explain code across entire projects at rapid pace.

Because every delay adds up when you’re using an assistant that plans, tests, and fixes code in multiple steps.

I mean of course it’s still much faster than manual coding — even with GPT-5 High, which some people were complaining about for its excessive thinking.

But SWE-1.5’s speed means those loops — generating, testing, adjusting — happen almost instantly.

You can keep your focus instead of staring at a loading indicator. Pair-programming with AI feels more natural and conversational.

What it’s built for

Windsurf designed the SWE family (which now includes SWE-1.5) for the full development lifecycle. That means:

  • Refactoring large repos — it can understand and safely update multiple files at once.
  • Debugging — it can read logs, spot issues, and suggest fixes without losing context.
  • Scaffolding new projects — it builds structure fast so you can focus on the creative parts.
  • Running tools and tests — the model isn’t just “talking” about code, it’s working with it.

It’s also deeply integrated into Windsurf’s IDE and agent system, which lets it actually do things — not just describe them.

Developers testing SWE-1.5 say it feels like working with a real teammate. You can throw complex refactors at it and it responds in seconds instead of minutes. The result is more experimentation, faster debugging, and fewer context switches.

These are the things that elevate the development experience significantly.

Try it now

SWE-1.5 is already live in the Windsurf IDE. If you’re on a Pro plan, you can use it right now; there’s also a free tier if you want to take it for a spin.

SWE-1.5 isn’t just about a bigger model number. It’s about speed that changes how you code. If you’ve ever wished your AI assistant could keep up with your thought process, this might be the first one that truly does.