10 VS Code extensions now completely destroyed by AI & coding agents

These lovely VS Code extensions used to be incredibly helpful for saving time and staying productive.

But it’s 2025 now, and coding agents and AI-first IDEs like Windsurf have made them all much less useful or completely obsolete.

1. JavaScript (ES6) code snippets

What did it do?
Provided shortcut-based code templates (e.g. typing clg → console.log()), saving keystrokes for common patterns.

Why less useful:
AI generates code dynamically based on context and high-level goals — not just boilerplate like forof → for (...) {} and clg → console.log(...). It adapts to your logic, naming, and intent without needing memorized triggers.

Just tell it what you want at a high level in natural language, and let it handle the details of the if statements, for loops, and all the rest.

And of course, when you want more low-level control, AI code completions are still there to easily write the boilerplate for you.
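To make the contrast concrete, here’s a minimal sketch; the prompt, data, and output are all invented for illustration:

```ts
// Snippet era: type "clg" + Tab, then fill in the blanks by hand.
// Agent era: describe the intent. A sketch of what a prompt like
// "log every failed order with its id" might produce:
const orders = [
  { id: 1, status: "failed" },
  { id: 2, status: "shipped" },
];

for (const order of orders) {
  if (order.status === "failed") {
    console.log(`Order ${order.id} failed`);
  }
}
```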

2. Regex Previewer

What did it do?
Helped users write and preview complex regular expressions for search/replace tasks or data extraction.

Why less useful:
AI understands text structure and intent. You just ask “extract all prices from the string with a new function in a new file” and it writes, explains, and applies the regex.
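Here’s a minimal sketch of the kind of code that prompt might produce; the pattern, function name, and sample data are illustrative assumptions:

```ts
// A sketch of what an agent might write for
// "extract all prices from the string".
export function extractPrices(text: string): number[] {
  // Matches "$5", "$19.99", and "$1,299.00"
  const pattern = /\$\d{1,3}(?:,\d{3})*(?:\.\d{2})?/g;
  return (text.match(pattern) ?? []).map((match) =>
    parseFloat(match.replace(/[$,]/g, ""))
  );
}

extractPrices("Shirt $19.99, hat $5, jacket $1,299.00");
// => [19.99, 5, 1299]
```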

3. REST Client

What did it do?
Let you write and run HTTP requests (GET, POST, etc.) directly in VS Code, similar to Postman.

Why less useful:
AI can intelligently run API calls with curl using context from your open files and codebase. You just say what you want to test — “Test this route with curl”.
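For a concrete feel, here’s what such a test boils down to, expressed as the equivalent fetch call rather than curl; the endpoint and payload are invented for illustration:

```ts
// The kind of request "test this route with curl" translates into.
const res = await fetch("http://localhost:3000/api/users", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ name: "Ada", email: "ada@example.com" }),
});

console.log(res.status, await res.json());
```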

4. autoDocstring

What did it do?
Auto-generated docstrings, function comments, and annotations from function signatures.

Why obsolete:
AI writes comprehensive documentation in your tone and style, inline as you code — with better context and detail than templates ever could.
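For instance, here’s the kind of inline JSDoc an agent drafts as you type; the function itself is a made-up example:

```ts
/**
 * Returns the order total in cents after applying a percentage discount.
 *
 * @param subtotalCents - Pre-discount total, in cents
 * @param discountPct - Discount as a percentage, e.g. 10 for 10%
 * @returns The discounted total, rounded to the nearest cent
 */
export function applyDiscount(subtotalCents: number, discountPct: number): number {
  return Math.round(subtotalCents * (1 - discountPct / 100));
}
```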

5. Emmet

What did it do?
Emmet allowed you to write shorthand HTML/CSS expressions (like ul>li*5) that expanded into full markup structures instantly.

Why less useful:
AI can generate semantic, styled HTML or JSX from plain instructions — e.g., “Create a responsive navbar with logo on the left and nav items on the right.” No need to memorize or type Emmet shortcuts when you can just describe the structure.

And of course, it doesn’t have to stop at basic HTML. It works just as well with React, Angular, Vue files, and so much more, as in the sketch below.
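Here’s a sketch of the JSX an agent might return for “a five-item nav list”, the same structure ul>li*5 used to expand to; the component and items are illustrative:

```tsx
// What ul>li*5 gave you, generated from plain English instead.
export function NavList() {
  const items = ["Home", "About", "Services", "Blog", "Contact"];
  return (
    <ul>
      {items.map((item) => (
        <li key={item}>{item}</li>
      ))}
    </ul>
  );
}
```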

6. Jest Snippets

What did it do?
Stubbed out unit test structures (e.g., Jest, Mocha) for functions, including basic test case scaffolding.

Why obsolete:
AI writes full test suites with assertions, edge cases, and mock setup — all custom to the function logic and use-case.
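As a sketch, here’s the shape of suite an agent might produce for a hypothetical formatPrice() helper; the module and test cases are illustrative:

```ts
import { formatPrice } from "./formatPrice"; // hypothetical module

describe("formatPrice", () => {
  it("formats a number as USD", () => {
    expect(formatPrice(19.5)).toBe("$19.50");
  });

  it("handles zero", () => {
    expect(formatPrice(0)).toBe("$0.00");
  });

  it("throws on negative input", () => {
    expect(() => formatPrice(-1)).toThrow(RangeError);
  });
});
```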

7. Angular Snippets (Version 18)

What did it do?
Generated code snippets for Angular components, services, and more.

Why obsolete:
AI scaffolds entire components, services, and pages from a plain description, with fewer constraints and no need for config.
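A minimal sketch of what “create a user card component” might scaffold, assuming a modern standalone Angular component; the names and fields are illustrative:

```ts
import { Component, Input } from "@angular/core";

@Component({
  selector: "app-user-card",
  standalone: true,
  template: `
    <div class="user-card">
      <h3>{{ name }}</h3>
      <p>{{ email }}</p>
    </div>
  `,
})
export class UserCardComponent {
  @Input() name = "";
  @Input() email = "";
}
```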

8. Markdown All in One

What did it do?
Helped structure Markdown files, offered live preview, and provided shortcuts for common patterns (e.g., headers, tables, badges).

Why less useful:
AI writes full README files — from install instructions to API docs and licensing — in one go. No need for manual structuring.

9. JavaScript Booster

What did it do?
JavaScript Booster offered smart code refactoring like converting var to const, wrapping conditions with early returns, or simplifying expressions.

Why obsolete:
AI doesn’t just refactor mechanically — it understands why a change improves the code. You can ask things like “refactor this function for readability” or “make this async and handle edge cases”, and get optimized results without clicking through suggestions.
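Here’s a before/after sketch of that difference; the function is invented for illustration:

```ts
// Before: the kind of code JavaScript Booster improved one suggestion
// at a time (var -> const, wrap with early return).
function getDiscountBefore(user: { isMember: boolean; years: number }) {
  var discount = 0;
  if (user.isMember) {
    if (user.years > 5) {
      discount = 20;
    } else {
      discount = 10;
    }
  }
  return discount;
}

// After: what "refactor this function for readability" can return in one pass.
function getDiscount(user: { isMember: boolean; years: number }): number {
  if (!user.isMember) return 0;
  return user.years > 5 ? 20 : 10;
}
```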

10. Refactorix

What did it do?
Refactorix offered context-aware, menu-driven refactors like extracting variables, inlining functions, renaming symbols, or flipping if/else logic, usually tied to language servers or static analysis.

Why obsolete:
AI agents don’t just apply mechanical refactors — they rewrite code for clarity, performance, or design goals based on your prompt.
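For instance, one of those menu-driven refactors, flipping an if/else, becomes a one-line prompt; the declarations below are stubs for illustration:

```ts
declare const isValid: boolean;   // stub for illustration
declare function save(): void;
declare function handleError(): void;

// Before: negated condition first.
if (!isValid) {
  handleError();
} else {
  save();
}

// After "flip this if/else so the happy path comes first":
if (isValid) {
  save();
} else {
  handleError();
}
```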

A mindset shift you need to start generating profitable SaaS ideas

Finding the perfect idea for a SaaS can feel like searching for a needle in a haystack.

There’s so much advice out there, so many “hot trends” to chase. But if you want to build something truly impactful and sustainable, there’s one fundamental principle to engrain in your mind: start with problems, not solutions.

It’s easy to get excited about a cool piece of technology or a clever feature. Maybe you’ve built something amazing in your spare time, and you think, “This would be great as a SaaS!” While admirable, this approach often leads to a solution looking for a problem. You’re trying to fit a square peg into a round hole, and the market rarely responds well to that.

“Cool” is great but “cool” without “useful” is… well… useless.

Instead, shift your focus entirely. Become a detective of discomfort. What irritates people? What takes too long? What’s needlessly complicated? Where are businesses bleeding money or wasting time? These are the goldmines of SaaS ideas. Every great SaaS product you can think of, from project management tools to CRM systems, was born out of a deep understanding of a specific, painful problem.

Think about it: before Slack, team communication was often fragmented across emails, multiple chat apps, and even physical whiteboards. The problem was clear: inefficiency and disorganization. Slack’s solution addressed that head-on. Before HubSpot, marketing and sales efforts were often disconnected and difficult to track. The problem was a lack of unified strategy and visibility. HubSpot built an integrated platform to solve it.

So, how do you uncover these problems? Start with your own experiences. What frustrations do you encounter in your daily work or personal life? Chances are, if you’re experiencing a pain point, others are too. Don’t dismiss those little annoyances; they can be the seeds of something big.

Next, talk to people. This is crucial. Engage with colleagues, friends, and even strangers in your target market. Ask open-ended questions. “What’s the most annoying part of your job?” “If you could wave a magic wand and eliminate one recurring task, what would it be?” Listen intently to their struggles and frustrations. Pay attention to the language they use to describe their pain.

Look for inefficiencies in existing workflows. Where do people use spreadsheets for things that clearly shouldn’t be in a spreadsheet? Where are manual processes still dominant when they could be automated? These are often indicators of ripe problem spaces.

Consider niche markets. Sometimes, the broadest problems are already being tackled by large players. But within specific industries or verticals, there might be unique pain points that are underserved. Diving deep into a niche can reveal highly specific problems that a tailored SaaS solution could effectively solve.

Don’t be afraid to validate your problem hypothesis. Before you write a single line of code, confirm that the problem you’ve identified is real, significant, and widely felt by a sufficient number of people. Will people pay to have this problem solved? That’s the ultimate validation.

Once you have a clear, well-defined problem, the solution will often emerge more naturally. Your SaaS will then be built for a specific need, rather than being a solution desperately searching for a home. This problem-first approach gives your SaaS idea a solid foundation, significantly increasing its chances of success in a competitive market. Remember, great SaaS isn’t about fancy tech; it’s about making people’s lives easier and businesses more efficient.

Microsoft’s shocking layoffs just confirmed the AI reality many programmers are desperately trying to deny

So it begins.

We told you AI was coming for tons of programming jobs but you refused to listen. You said it’s all mindless hype.

You said AI is just “improved Google”. You said it’s “glorified autocomplete”.

Now Microsoft just swung the axe big time. Huge huge layoffs. Thousands of software developers gone.

Okay, maybe this is just an isolated event, right? It couldn’t possibly be a sign of things to come, right?

Oh right, it was just “corporate restructuring.”

Fine, I won’t argue with you, but you need to look at the facts.

30% of production code at Microsoft is now written by AI. And that’s not pulled out of thin air; it comes from Satya Nadella himself (heard of the guy?).

25% of production code in Google written by AI.

Oh but I know the deniers among you will try to cope by saying it’s just template boilerplate code or unit tests that the AI writes. No, it doesn’t write “real code” that needs “creativity” and “problem solving”. Ha ha ha.

Or they’ll say trash like, “Oh but my IDE writes my code too, and I still have my job”. Yeah I’ve seen this.

Sure, because IDE tools like search & replace or IntelliSense are in any way comparable to an autonomous AI that understands your entire codebase and makes intelligent changes across several files from a single prompt.

Maybe you can’t really blame them since these days even the slightest bit of automation in a product is called AI by desperate marketing.

Oh yes, powerful agentic reasoning vibe coding tools like Windsurf and Cursor are no different from hard-coded algorithmic features like autocomplete, right?

I mean these people already said the agentic AI tools are no different from copying & pasting from Google. They already said it can’t really reason.

Just glorified StackOverflow right?

Even with the massive successes of AI tools like GitHub Copilot you’re still here sticking your head in the sand and refusing to see the writing on the wall.

VS Code saw the writing on the wall and started screaming AI from the rooftops. It’s all about Copilot now.

Look, now OpenAI wants to buy Windsurf for 3 billion dollars. Just for fun, right?

Everybody can see the writing on the wall.

And you’re still here talking trash about how it’s all just hype.

What would it take to finally convince these people that these AI software engineering agents are the real deal?

Microsoft’s new MCP AI upgrade is a huge huge sign of things to come

This is wild.

Microsoft just released an insane upgrade for their OS that will change everything about software development — especially when Google and Apple follow suit.

MCP support in operating systems like Windows is going to be an absolute game changer for how we develop apps and interact with our devices.

The potential is massive. You could build AI agents that understand far, far more than what’s going on within your app.

Look at how Google’s Android assistant is controlling the entire OS like a user would — MCP would do this much faster!

These agents will now have access to a ridiculous amount of context from other apps and the OS itself.

Speaking of OSs we could finally have universal AI assistants that can SEE AND DO EVERYTHING.

We’re already seeing Google start to do this internally between Gemini and other Google apps like YouTube and Maps.

But now we’re talking every single app on your device that does anything — using any one of them to get data and perform actions autonomously as needed.

No more of the dumb trash we’ve been putting up with that could only do basic stuff like set reminders or search the web, and can’t even understand what you’re telling it to do at times.

Now you just tell your assistant, “Give me all my photos from my last holiday and send them to Sandra and ask her what she thinks” — and that’s that. You don’t need to open anything.

It will search Apple Photos and Google Photos and OneDrive and every photo app on your device.

It would resolve every ambiguity with simple questions: send them how, WhatsApp? Which Sandra?

We’ve been building apps that largely exist in their own little worlds. Sure they talk to APIs and maybe integrate with a few specific services. But seamless interaction with the entire operating system has been more of a dream than a reality.

MCP blows that wide open. Suddenly, our AI agents aren’t just confined to a chatbot window. They can access your file system, understand your active applications, and even interact with other services running on Windows. This isn’t just about making Copilot smarter; it’s about making your agents smarter, capable of far more complex and context-aware tasks.

Imagine an AI agent you build that can truly understand a user’s workflow. It sees they’re struggling with a task, understands the context from their open apps, and proactively suggests a solution or even takes action. No more isolated tools. No more jumping between applications just to get basic information.

Of course the security implications are massive. Giving AI this level of access requires extreme caution. Microsoft’s focus on secure proxies, tool-level authorization, and runtime isolation is crucial. Devs need to be acutely aware of these new attack surfaces and build with security as a paramount concern. “Trust, but verify” becomes even more critical when an AI can manipulate your system.

So, what would this mean for your SaaS app? Start thinking beyond API calls. Think of every other app or MCP source your user could have. Think of all the ways an AI agent could use the data from your app.
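To make that concrete, here’s a minimal sketch of exposing one piece of your app’s data as an MCP tool, assuming the official TypeScript SDK’s McpServer API; the server name, tool, and response are all invented for illustration:

```ts
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

// A hypothetical invoicing SaaS exposing one read-only tool.
const server = new McpServer({ name: "acme-invoices", version: "1.0.0" });

server.tool(
  "get_overdue_invoices",
  { customerId: z.string() }, // input schema the agent fills in
  async ({ customerId }) => ({
    // A real app would query its backend here.
    content: [{ type: "text", text: `No overdue invoices for ${customerId}` }],
  })
);

// Any MCP-aware agent (Copilot, an OS-level assistant) connects over stdio.
await server.connect(new StdioServerTransport());
```

Once something like that exists, the assistant in the Sandra example above doesn’t need your UI at all; it just calls the tool.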

This is a clear signal that the future of software development involves building for an intelligent, interconnected environment. The era of the all-knowing all-powerful universal AI assistant isn’t a distant sci-fi fantasy; it’s being built, piece by piece, right now. And with MCP, we’ve just been handed a key component to help us build it. Let’s get to work.

This new AI tool from Google just destroyed web & UI designers

Wow this is absolutely massive.

The new Stitch tool from Google may have just completely ruined the careers of millions of web & UI designers — and it’s only just getting started.

Just check out these stunning designs:

This is an absolute game changer for anyone who’s ever dreamed of building an app but felt intimidated by the whole design thing.

It’s a huge huge deal.

Just imagine you have a classic app idea — photo-sharing app, workout app, to-do list, whatever…

❌ Before:

You either hire a designer or spend hours wrestling with design software trying to create a pixel-perfect UI.

Or maybe you even just try to wing it and hope for the best, making crucial design decisions on the fly as you develop the app.

✅ Now:

Just tell Stitch whatever the hell you’re thinking.

Literally just describe your app in plain English.

“A blue-themed photo-sharing app”:

Look how Stitch let me easily update the design, adding likes for every photo:

Or, if you’ve got a rough sketch on a napkin, snap a pic and upload it. Stitch takes your input, whatever it is, and then — BOOM — it generates a visual design for your app’s user interface. It’s like having a personal UI designer at your fingertips.

But it doesn’t stop there. This is where it gets really cool. Stitch doesn’t just give you a pretty picture. It also spits out the actual HTML and CSS code that brings that design to life. Suddenly, your app concept isn’t just an idea; it’s a working prototype. How amazing is that?

Stitch is pretty smart too. It can give you different versions of your design, so you can pick the one you like best. You can also tweak things – change the colors, switch up the fonts, adjust the layout. It’s incredibly flexible. And if you want to make changes, just chat with Stitch. Tell it what you want to adjust, and it’ll make it happen. It’s a conversation, not a command line.

Behind all this magic are Google’s powerful AI models, Gemini 2.5 Pro and Gemini 2.5 Flash. These are the brains making sense of your ideas and turning them into designs and code. The whole process is surprisingly fast.

Who is this for, you ask? Well, it’s for everyone. If you’re a complete beginner with zero design or coding experience, Stitch is your new best friend. You can create professional-looking apps without breaking a sweat.

For seasoned developers, it’s a fantastic way to rapidly prototype ideas and get a head start on coding.

Right now, Stitch is in public beta, available in 212 countries, though it only speaks English for now. And yes, you can use it for free, with a monthly limit on how many designs you can generate.

It’s a super-powered starting gun for your app development journey. It streamlines the early stages to get you from a raw idea to a tangible design and code much faster.

And if you still want more fine-grained control, you can always export your design to Figma.

So, if you’ve got an app idea bubbling in your mind, Google Stitch might just be the tool you’ve been waiting for to bring it to life.

Google just destroyed OpenAI and Sora without even trying

Woah this is completely insane.

Google’s new Veo 3 video generator completely blows OpenAI’s Sora out of the water.

This level of realism is absolutely stunning. This is going to destroy so many jobs…

And now it has audio — something Sora is totally clueless about.

Veo 3 videos come with sound effects, background ambient noises, and even character dialogue, all perfectly synced.

Imagine a comedian on stage, and you hear their voice, the audience laughing, and the subtle murmurs of the club – all generated by AI. This is just a massive massive leap forward.

Models like Sora can create clips from text prompts but you have to go through a whole separate process to add sound. That means extra time, extra tools, and often, less-than-perfect synchronization.

Veo 3 streamlines the entire creative process. You input your vision, and it handles both the sight and the sound.

This isn’t just about adding noise. Veo 3 understands the context. If you ask for a “storm at sea,” you’ll get the crashing waves, the creaking ship, and maybe even a dramatic voiceover, all perfectly woven into the visual narrative. It’s truly uncanny how realistic it feels.

Beyond the audio, Veo 3 also boasts incredible visual fidelity with videos up to 4K resolution with photorealistic details. It’s excellent at interpreting complex prompts, translating your detailed descriptions into stunning visuals.

You can even create videos using the aesthetic of an image — much more intuitive than having to describe the style in text.

And — this one is huge — you can reuse the same character across multiple videos and keep things consistent.

I’m sure you can see how big of a deal this is going to be for things like movie production.

You can even dictate camera movements – pans, zooms, specific angles – and Veo 3 will try its best to execute them.

Google’s new AI-powered filmmaking app, Flow, integrates with Veo 3, offering an even more comprehensive environment for creative control. Think of it as a virtual production studio where you can manage your scenes, refine your shots, and bring your story to life.

Of course, such powerful technology comes with responsibility. Google is implementing safeguards like SynthID to watermark AI-generated content, helping to distinguish it from real footage. This is crucial as the lines between reality and AI-generated content continue to blur.

Right now, Veo 3 is rolling out to Google AI Pro and Ultra subscribers in select regions, with more to follow. It’s certainly a premium offering. However, its potential to democratize video creation is immense. From independent filmmakers to educators and marketers, this tool could transform how we tell stories.

Content creation and film production will never be the same with Veo 3.

And don’t forget, this is just version 3. Remember how ridiculously fast Midjourney evolved in just 2.

This is happening and there’s no going back.

The new Claude 4 coding model is an absolute game changer

Woah this is huge.

Anthropic just went nuclear on OpenAI and Google with their insane new Claude 4 model. The powerful response we’ve been waiting for.

Claude 4 literally just did all the coding by itself on a project for a full hour and a half — zero human assistance.

That’s right: 90 actual minutes of total hands-free autonomous coding genius with zero bugs. The progress is wild. They will never admit it, but it’s looking like coding jobs are well under serious attack right now. It’s happening.

How many of us can even code for 30 minutes straight at maximum capacity? Good, working code? lol…

This is just a massive massive leap forward for how we build software. These people are not here to joke around with anyone, let me just tell you.

Zero punches pulled. They’re explicitly calling Opus 4 the “world’s best coding model.” And based on their benchmark results, like outperforming GPT-4.1 on SWE-bench and Terminal-bench, they’ve got a strong case.

But what truly sets it apart is how Claude 4 handles complex, long-horizon coding tasks like the one in the demo.

We’re talking hours and hours of sustained focus. Imagine refactoring a massive codebase or building an entire full-stack application with an AI that doesn’t lose its train of thought. Traditional AI often struggles to maintain context over extended periods, but Claude 4 is designed to stay on target.

Another killer feature is its memory.

Give Claude 4 access to your local files, and it can create “memory files.” These files store crucial project details, coding patterns, and even your preferences. This means Claude remembers your project’s nuances across sessions, leading to more coherent and effective assistance. It’s like having a coding buddy who never forgets your project’s unique quirks.

And for those of us who dread debugging, Claude 4 is here to help.

It’s already looking incredibly good at finding even subtle issues, like memory leaks. An AI that not only writes clean, well-structured code but also sniffs out those pesky hidden bugs. That alone is worth its weight in gold.

Beyond individual tasks, Claude 4 excels at parallel tool execution. It can use multiple tools simultaneously, speeding up complex workflows by calling on various APIs or plugins all at once. This means less waiting and more efficient development.

Anthropic is also putting a big big emphasis on integration. They’re building a complete “Claude Code” ecosystem. Think seamless integration with your favorite developer tools.

Claude Sonnet 4 is already powering a new coding agent in GitHub Copilot – Windsurf and Cursor will follow suit shortly.

Plus, new beta extensions are available for VS Code and JetBrains, allowing Claude’s proposed edits to appear directly inline. This isn’t just a separate tool; it’s becoming an integral part of your development environment.

They’ve even released a Claude Code SDK, letting you use the coding assistant directly in your terminal or even running in the background. And with the Files API and MCP Connector, it can access your code repositories and integrate with multiple external tools effortlessly.

Claude 4 isn’t just a new model — it’s a new era for software development.

This new AI coding agent from Google is unbelievable

Wow this is insane.

This new AI coding agent from Google is simply incredible. Google is getting dead serious about dev tooling. No more messing around.

Jules is a genius agent that can understand your intent, plan out steps, and execute complex coding tasks without even trying.

A super-smart teammate who can tackle coding tasks on its own asynchronously to make software dev so much easier.

It works seamlessly in the background so you can focus on other important stuff.

Gemini 2.5 Pro

Jules is powered by Gemini 2.5 Pro, which is Google’s advanced AI model for complex tasks. This gives it serious brainpower for understanding code.

And you bet 2.5 Flash is on its way to give it even more insane speeds.

When you give Jules a task it clones your codebase into a secure virtual machine in the Google Cloud. This is like a private workspace where Jules can experiment safely without messing with your live code.

It then understands the full context of your project. This is crucial because it helps Jules make smart, relevant changes. It doesn’t just look at isolated bits of code; it sees the whole picture.

After it’s done, Jules shows you its plan, its reasoning for the changes, and a “diff” of what it changed. You get to review everything and approve it before it goes into your main project. It even creates pull requests for you on GitHub!

Jules is quite the multi-tasker. It can handle a variety of coding chores you might not enjoy doing yourself.

For example, it can write tests for your code, which is super important for quality. It can also build new features from scratch, helping you speed up development.

Bug fixing? Yep, Jules can do that too. It can even bump dependency versions, which can be a tedious and error-prone task.

One cool feature is its audio changelogs. Jules can give you spoken summaries of recent code changes, turning your project history into something you can simply listen to.

Google has made it clear that you’re always in charge. Jules doesn’t train on your private code, and your data stays isolated. You can review and modify Jules’s proposed plans at every step.

It works directly with GitHub, so it integrates seamlessly with your existing workflow. No need to learn a new platform or switch between different tools.

Jules is currently in public beta, and it’s free to use with some limits. This is a big step towards “agentic development,” where AI systems take on more responsibility in the software development process.

It might sound like Jules is coming for developer jobs, but that’s probably not the goal here — at least for now.

Jules is meant to be a powerful tool that frees up developers to focus on higher-level thinking, design, and more creative problem-solving. It’s about making you more productive and efficient.

So, if you’re a developer, now’s a great time to check out Jules. It could really change the way you work.

Google I/O was completely insane for developers

Google I/O yesterday was simply unbelievable.

AI from head to toe and front to back, insane new AI tools for everyone — Search AI, Android AI, XR AI, Gemini upgrades…

Developers were so, so not left behind, with a ridiculous amount of updates across their developer products.

Insane coding agents, huge model updates, brand new IDE releases, crazy new AI tools and APIs…

Just insane.

Google sees developers as the architects of the future, and I/O 2025 definitely proved it.

The goal is simple: make building amazing AI applications even better. Let’s dive into some of the highlights.

Huge Gemini 2.5 Flash upgrades

The Gemini 2.5 Flash Preview is more powerful than ever.

This new version of their workhorse model is super fast and efficient, with improved coding and complex reasoning.

They’ve also added “thought summaries” to their 2.5 models for better transparency and control, with “thinking budgets” coming soon to help you manage costs. Both Flash and Pro versions are in preview now in Google AI Studio and Vertex AI.

Exciting new models for every need

Google also rolled out a bunch of new models, giving developers more choices for their specific projects.

Gemma 3n

This is their latest open multimodal model, designed to run smoothly on your phones, laptops, and tablets. It handles audio, text, images, and video! You can check it out in Google AI Studio and with Google AI Edge today.

Gemini Diffusion

Get ready for speed! This new text model is incredibly fast, generating content five times quicker than their previous fastest model, while still matching its coding performance. If you’re interested, you can sign up for the waitlist.

Lyria RealTime

Imagine creating and performing music in real-time. This experimental model lets you do just that! It’s available through the Gemini API.

Beyond these, they also introduced specialized Gemma family variants:

MedGemma

This open model is designed for medical text and image understanding. It’s perfect for developers building healthcare applications, like analyzing medical images. It’s available now through Health AI Developer Foundations.

SignGemma

An upcoming open model that translates sign languages (like American Sign Language to English) into spoken language text.

This will help developers create amazing new apps for Deaf and Hard of Hearing users.

Fresh tools to make software dev so much easier

Google truly understands the developer workflow, and they’ve released some incredible tools to streamline the process.

A New, More Agentic Colab

Soon, Colab will be a fully “agentic” experience. You’ll just tell it what you want, and it will take action, fixing errors and transforming code to help you solve tough problems faster.

Gemini Code Assist

Good news! Their free AI-coding assistant, Gemini Code Assist for individuals, and their code review agent, Gemini Code Assist for GitHub, are now generally available. Gemini 2.5 powers Code Assist, and a massive 2 million token context window is coming for Standard and Enterprise developers.

Firebase Studio

Firebase Studio, the replacement for Project IDX, got its official unveiling.

This new cloud-based AI workspace makes building full-stack AI apps much easier.

You can even bring Figma designs to life directly in Firebase Studio. Plus, it can now detect when your app needs a backend and set it up for you automatically.

Jules

Now available to everyone, Jules is an asynchronous coding agent. It handles all those small, annoying tasks you’d rather not do, like tackling bugs, managing multiple tasks, or even starting a new feature. Jules works directly with GitHub, clones your repository, and creates a pull request when it’s ready.

Stitch

This new AI-powered tool lets you generate high-quality UI designs and front-end code with simple language descriptions or image prompts. It’s lightning-fast for bringing ideas to life, letting you iterate on designs, adjust themes, and easily export to CSS/HTML or Figma.

Powering up with the Gemini API

The Gemini API also received significant updates, giving developers even more control and flexibility.

Google AI Studio updates

This is still the fastest way to start building with the Gemini API. It now leverages the cutting-edge Gemini 2.5 models and new generative media models like Imagen and Veo. Gemini 2.5 Pro is integrated into its native code editor for faster prototyping, and you can instantly generate web apps from text, image, or video prompts.

Native Audio Output & Live API

New Gemini 2.5 Flash models in preview include features like proactive video (detecting key events), proactive audio (ignoring irrelevant signals), and affective dialogue (responding to user tone). This is rolling out now!

Native Audio Dialogue

Developers can now preview new Gemini 2.5 Flash and 2.5 Pro text-to-speech (TTS) capabilities. This allows for sophisticated single and multi-speaker speech output, and you can precisely control voice style, accent, and pace for truly customized AI-generated audio.

Asynchronous Function Calling

This new feature lets you call longer-running functions or tools in the background without interrupting the main conversation flow.

Computer Use API

Now in the Gemini API for Trusted Testers, this feature lets developers build applications that can browse the web or use other software tools under your direction. It will roll out to more developers later this year.

URL Context

They’ve added support for a new experimental tool, URL context, which retrieves the full page context from URLs. This can be used alone or with other tools like Google Search.

Model Context Protocol (MCP) support

The Gemini API and SDK will now support MCP, making it easier for developers to use a wide range of open-source tools.

Google I/O 2025 truly delivered a wealth of new models, tools, and API updates.

It’s clear that Google is committed to empowering devs to build the next generation of AI applications.

The future is looking incredibly exciting!

OpenAI’s new Codex AI agent wants to kill IDEs forever

OpenAI’s new Codex AI agent is seriously revolutionary.

This is in a completely different league from agentic tools like Windsurf or Cursor.

Look how Codex effortlessly fixed several bugs in this project — completely autonomously.

37 good issues easily slashed away without the slightest bit of human intervention.

All this time we’ve been gushing over how great all these AI coding agents are with their powerful IDE integration and context-aware suggestions and multi-file edits.

Now here comes OpenAI Codex with something radically different. Not even close.

With Codex we might all eventually end up saying bye bye to IDEs altogether.

No more opening up your lovely VS Code to run complex command-line scripts or navigate files or modify code.

You simply tell an AI what you want done to your system: “Add a user authentication flow.” “Fix the bug in the payment gateway.”

And Codex would just do it.

This is the promise of OpenAI’s new Codex agent. It’s an AI so advanced, so capable, that it might just pave the way for a future where the traditional IDE becomes a relic.

The core idea is astonishingly simple: you describe the desired changes in natural language, and Codex does the rest.

It’s not just generating a snippet of code; it’s making comprehensive modifications to your entire system.

Codex lives in the cloud. You interact with it through a simple interface like a chat window. You’re giving it instructions, not manipulating files.

Think about it: the entire development process happens in a sandboxed, cloud environment.

Codex takes your instructions, loads your codebase into its secure workspace, makes the changes, runs tests, and even prepares pull requests. All of this without you ever needing to open a single file on your local machine.

This is the ultimate abstraction. The IDE, that familiar workbench where you meticulously craft every line, simply vanishes. The user interface for coding becomes your own language. Zero code.

For years, IDEs have been our central hub for development. They provide syntax highlighting, debugging tools, version control integration, and so much more. They’re indispensable. Or so we thought.

Codex challenges this fundamental assumption. If an AI can reliably understand complex instructions, perform multi-file edits, ensure code quality, and even integrate with your deployment pipeline, what’s the point of the traditional IDE? Its features become functionalities of the AI agent itself.

The workflow shifts dramatically. Instead of spending hours writing code, you’re now guiding an incredibly powerful AI. Your role evolves from a coder to a system architect, a high-level strategist. You define the “what,” and Codex figures out the “how.”

This isn’t about simply auto-completing your lines. It’s about delegating entire feature development cycles. Debugging? Codex can run tests and identify issues. Refactoring? Just tell it what structure you prefer.

While agents like Windsurf Cascade are about augmenting your current IDE experience, Codex is hinting at a future where that experience is entirely re-imagined. It’s a bold step towards a world where coding becomes less about the mechanics of typing and more about the articulation of intent.

Will IDEs truly die? Perhaps not entirely, not overnight. But the power and autonomy of agents like Codex suggest a future where the current development paradigm is fundamentally reshaped. Your keyboard might still be there, but your IDE might just be a whisper in the cloud.