Tari Ibaba

Tari Ibaba is a software developer with years of experience building websites and apps. He has written extensively on a wide range of programming topics and has created dozens of apps and open-source libraries.

Vibe coding jobs are booming and programmers are in denial

It’s happening.

Vibe coding jobs are becoming a reality — and these aren’t low-paying gigs either.

Look at this — a vibe coding job listing on Indeed paying up to $220,000 per year.

This is the real deal.

Many of you have been laughing at vibe coding and AI-assisted coding in general — now see for yourself.

More and more companies are treating vibe coding as a serious skill.

Of course many developers are still in denial.

Developers who keep ignoring AI are going to eventually find themselves left behind.

Because what vibe coding really means is that developers are finally being allowed to focus. No more spinning your wheels on boilerplate.

No more spending hours trying to scaffold out yet another UI flow that’s 90% the same as last week’s. No more pretending documentation is exciting.

AI does the boring stuff. You do the real work — thinking clearly, setting direction, maintaining the mission of the project.

The tools are here. Cursor. Windsurf. Claude Code. They let you shape a whole app without opening Stack Overflow once. You flow. You prompt. You build. You debug at the speed of thought.

And businesses are catching on. They’re not just hiring developers who can memorize syntax and write boilerplate from scratch. They’re hiring developers who can design systems, think fast, move faster — and collaborate with AI like it’s second nature.

That vibe coder you’re laughing at? They’re now outputting 10x the features per sprint, spending half the time sipping coffee and planning product-market fit, while the AI handles the grind.

It’s no longer “do you use AI when coding?” The question now is: how well do you prompt? How fluid is your interaction with a coding agent? Can you build and ship products faster than the average dev team — on your own?

We’re talking developers who can:

  • Build an MVP in a weekend using only natural language + refactors
  • Design UIs visually and prompt the backend logic to match
  • Jump between different stacks without having to “learn” them from scratch

They don’t memorize. They orchestrate. That’s the new skill set.

And vibe coding doesn’t kill the developer. It amplifies the real ones. The strategic thinkers. The system architects. The fast learners. The ones who know how to move ideas into product — not just line-by-line into code.

So yeah. If you’re laughing now you’ll probably be crying later.

The wave is already here. Some devs are riding it.

Others? Still stuck complaining in Reddit threads about how terrible AI is for “corrupting” software dev.

Good luck with that.

We’re building the future, with our minds. One prompt at a time.

These AI agent tricks drastically improve coding accuracy

AI coding agents are unbelievable as they are — but there are still tons of powerful techniques that will maximize the value you get from them.

Use these tips to save hours and drastically improve the accuracy and predictability of your coding agents.

1. Keep files short and modular

Overly long files are one of the biggest causes of syntax errors in agent edits.

Break your code into small, self-contained files — around 200 lines or fewer. This helps the agent:

  • Grasp intent and logic quickly.
  • Avoid incorrect assumptions or side effects.
  • Produce accurate edits.

Short files also simplify reviews. When you can scan a diff in seconds, you catch mistakes before they reach production.

2. Customize the agent with system prompts

System prompts are crucial for guiding the AI’s behavior and ensuring it understands your intentions.

Before you even start coding, take the time to craft clear and concise system prompts.

Specify the desired coding style, architectural patterns, and any constraints or conventions your project follows.

For example, I’m not a fan of how Windsurf generates code riddled with comments — especially those verbose doc comments before every function.

So I set a system prompt like “Don’t include any comments in your generated code.”

Or what if you use Yarn or PNPM in your JS projects? Coding agents typically prioritize npm by default.

So you’d add: “Always use Yarn for npm package installations.”

In Windsurf you can set system prompts for Cascade with Global Rules in global_rules.md.
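For example, a minimal global_rules.md could look like this — these rules are just the examples above, written down:

```markdown
# Global Rules

- Don't include any comments in generated code.
- Always use Yarn for npm package installations.
- Keep files small and self-contained (about 200 lines max).
```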

3. Use MCP to drastically improve context and capability

Connect the agent to live project data—database schemas, documentation, API specs—via Model Context Protocol (MCP) servers. Grounded context reduces hallucinations and ensures generated changes fit your actual environment.

Without MCP integration, you’re missing serious performance gains. Give the agent all the context it needs to maximize accuracy and run actions on the various services across your system without you ever having to switch from your IDE.
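As a sketch of what that wiring looks like: many MCP-aware tools register servers in a JSON config along these lines (the server name, package, and connection string here are illustrative; check your tool’s docs for the exact file and location):

```json
{
  "mcpServers": {
    "postgres": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-postgres",
        "postgresql://localhost/mydb"
      ]
    }
  }
}
```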

4. Switch models when one fails

Different models can excel at different tasks.

If the agent repeats mistakes or gives off-base suggestions, try swapping models instead of endless retries.

A new model with the same prompt often yields fresh, better results.

Also a great tactic for overcoming stubborn errors.

5. Verify every change (to the line)

AI edits can look polished yet contain tiny changes you didn’t ask for — like undoing a recent change you made. Windsurf is especially fond of this.

Never accept changes blindly:

  • Review diffs thoroughly.
  • Run your test suite.
  • Inspect critical logic paths.

Even if Windsurf applies edits smoothly, validate them before merging. Your oversight transforms a powerful assistant into a safe collaborator.

6. “Reflect this change across the entire codebase”

Sometimes you tell the agent to make changes that can affect multiple files and projects — like renaming an API route in your server code that you use in your client code.

Telling it to “reflect the change you made across the entire codebase” is a powerful way to ensure that it does exactly that — making sure that every update that needs to happen from that change happens.

7. Revert, don’t retry

It’s tempting to try and “fix” the AI’s incorrect output by continually providing more context or slightly altering your prompt.

Or just saying “It still doesn’t work”.

But if an AI agent generates code that is fundamentally wrong or off-track, the most efficient approach is often to revert the changes entirely and rephrase your original prompt or approach the problem from a different angle.

Trying to incrementally correct a flawed AI output can lead to a tangled mess of half-baked solutions.

A clean slate and a fresh, precise prompt will almost always yield better results than iterative corrections.

AI coding agents are force multipliers—especially when you wield them with precision. Master these habits, and you’ll turn your agent from a novelty into a serious edge.

How AI massively upgrades every single step of software development

Software development is changing fast and AI is at the center of it.

Today’s best developers aren’t just writing code — they’re collaborating with intelligent agents to think, plan, and build differently.

There’s a shift happening in how apps are imagined, designed, and brought to life.

Developers who know how to work with AI can move faster, think bigger, and build smarter — from the very first spark of an idea all the way to a finished product.

Generate requirements

AI helps you systematically break down a high-level idea into concrete, manageable requirements.

Instead of getting stuck in a brainstorming loop, you can use an AI agent to act as a consultant, asking clarifying questions and suggesting a comprehensive list of features.

This helps you identify potential edge cases and missing components early in the process.

What the AI can do

  • Generate a list of functional and non-functional requirements.
  • Ask clarifying questions to refine your initial idea.
  • Analyze your requirements for potential contradictions or missing details.
  • Suggest features based on industry standards and best practices.

Example prompts

  • “I want to build a social media app for fitness enthusiasts. What are the core features I’ll need for this app? Categorize them by user roles (e.g., individual user, admin).”
  • “Given the requirements for a real-time chat feature, what are the potential technical challenges and non-functional requirements I should consider, such as scalability and security?”
  • “Act as a product manager. I’ve defined the following features for my to-do list app: user authentication, task creation, task deletion, and task editing. What is missing? What other features would make this a more complete product?”

Design and plan

Once you have your requirements, an AI agent can help you with the architectural design, data modeling, and even UI/UX wireframing. It can suggest design patterns, database schemas, and user flows, acting as a virtual architect or designer.

What the AI can do

  • Propose an application architecture (e.g., monolithic, microservices).
  • Design a database schema based on your app’s features.
  • Generate user flows.
  • Suggest UI components and design patterns for specific screens.

Example prompts

  • “Design a scalable and secure backend architecture for a real-time messaging app. It should support millions of concurrent users and handle high-volume data traffic.”
  • “Based on the following features for a food delivery app: user profiles, restaurant listings, order tracking, and payment processing, generate a detailed database schema using PostgreSQL.”
  • “Give me a UI flow for the user registration and login process of a mobile app. The flow should include screens for sign-up, email verification, password reset, and a successful login state.”
  • “Generate an example of a component for a social feed that uses the React framework. The component should display posts with images, likes, and comments.”

Prototype rapidly

When it’s time to build, AI can rapidly create the foundational structure or “scaffolding” of your app.

This includes generating project directories, configuration files, and basic API endpoints, allowing you to start building on a solid foundation without the manual setup overhead.

What the AI can do

  • Generate a project structure for a specific framework (e.g., a Next.js app with a Tailwind CSS setup).
  • Create a basic API with CRUD (Create, Read, Update, Delete) endpoints.
  • Write configuration files for linters, formatters, and build tools.

Example prompts

  • “Create the file and folder structure for a full-stack e-commerce application using Next.js, Express.js, and MongoDB.”
  • “Generate a basic REST API in Python using Flask. The API should have endpoints for a ‘products’ resource, including /products (GET), /products/:id (GET), and /products (POST).”
  • “Set up a new React project in the current directory. Include the necessary dependencies for state management with Zustand and routing with React Router.”

Create and build

This is where AI shines as a pair programmer.

Go beyond simple functions and files — ask the AI to implement entire features at a high level and stay focused on the bigger picture.

What the AI can do

  • Write feature-complete code blocks based on a description.
  • Integrate different services or APIs.
  • Refactor existing code to improve performance or readability.
  • Generate test data for specific functions or features.

Example prompts

  • “Implement the ‘user registration’ feature. It should take a username and password, validate them, hash the password, and store the new user in the database. Use Express.js and Mongoose.”
  • “Create a ‘search functionality’ for a list of blog posts. The search should be case-insensitive and match against the post’s title and content. Write this using React with hooks.”
  • “Using the requests library in Python, write a function to fetch weather data from the OpenWeatherMap API for a given city and parse the JSON response.”
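For the last prompt, the parsing half of the answer might look roughly like this sketch (the field names follow OpenWeatherMap’s documented response shape; the network call is left out so the example stays self-contained):

```python
import json

def parse_weather(raw: str) -> dict:
    """Extract the fields we care about from an OpenWeatherMap-style
    JSON response: city name, temperature, and a short description."""
    data = json.loads(raw)
    return {
        "city": data["name"],
        "temp": data["main"]["temp"],
        "description": data["weather"][0]["description"],
    }

# A trimmed-down sample response, for illustration only.
sample = '{"name": "Lagos", "main": {"temp": 301.2}, "weather": [{"description": "light rain"}]}'
report = parse_weather(sample)
```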

Testing

AI can automate and enhance your testing process by generating test cases, identifying potential bugs, and even creating entire test suites. This helps you catch errors early and ensures your code is robust.

What the AI can do

  • Generate unit tests for functions or components.
  • Create integration tests to verify interactions between different parts of your application.
  • Find edge cases and potential failure scenarios for your code.
  • Write end-to-end tests for critical user flows.

Example prompts

  • “Write Jest unit tests for the following calculateTax function. Include tests for positive cases, zero values, and invalid inputs like negative numbers or strings.”
  • “Generate integration tests for the user authentication flow to ensure that a new user can successfully register and log in.”
  • “Analyze the updateShoppingCart function for potential bugs or race conditions, and suggest test cases to expose them.”
  • “Write a Playwright test script to simulate a user adding an item to a cart and completing the checkout process on an e-commerce website.”
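To make the first prompt concrete, here’s the kind of test suite it might produce — sketched in Python with plain asserts instead of Jest, and with a hypothetical calculate_tax included so the example runs on its own:

```python
def calculate_tax(amount, rate=0.25):
    """Hypothetical tax helper: returns amount * rate, rejecting
    negative or non-numeric input."""
    if not isinstance(amount, (int, float)) or isinstance(amount, bool):
        raise TypeError("amount must be a number")
    if amount < 0:
        raise ValueError("amount must be non-negative")
    return amount * rate

# Positive case
assert calculate_tax(100) == 25.0
# Zero value
assert calculate_tax(0) == 0.0
# Invalid inputs: negative numbers and strings should raise
for bad in (-5, "100"):
    try:
        calculate_tax(bad)
    except (TypeError, ValueError):
        pass
    else:
        raise AssertionError(f"expected an error for {bad!r}")
```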

Code review

Even for flawless code, an AI can provide an objective second opinion. It can check for style consistency, potential security vulnerabilities, and opportunities for optimization that you might have missed.

What the AI can do

  • Check for adherence to specific style guides (e.g., Airbnb JavaScript Style Guide).
  • Identify code smells, anti-patterns, and potential performance bottlenecks.
  • Suggest refactoring to make the code cleaner and more efficient.
  • Detect common security vulnerabilities like SQL injection or cross-site scripting (XSS).

Example prompts

  • “Review the following code for any security vulnerabilities, especially in handling user input.”
  • “Analyze this file and suggest ways to simplify the logic and improve its readability. Use clear, concise language in your suggestions.”
  • “Perform a code review of this pull request. Check for adherence to SOLID principles and clean code practices. Also, provide a brief summary of the changes and your overall assessment.”
  • “Suggest a better name for the processData function and provide an explanation for why the new name is more appropriate.”

Documentation

One of the most tedious parts of coding, documentation, can be almost fully automated with AI. A well-prompted AI can generate a wide range of documentation, from in-line code comments to comprehensive API guides.

What the AI can do

  • Write clear and concise JSDoc comments for functions and classes.
  • Generate a README.md file for a project, including setup instructions and usage examples.
  • Create API documentation based on your codebase.
  • Summarize a complex code block or a project’s purpose in a non-technical way.

Example prompts

  • “Write JSDoc comments for the fetchUserData function, including a description of its purpose, parameters, and what it returns.”
  • “Generate a README.md file for a Python project that uses Flask and SQLAlchemy. The README should include an overview, installation instructions, how to run the app locally, and a basic API endpoint reference.”
  • “Act as a technical writer. Explain how the PaymentService class works in simple, clear terms for a new developer joining the team.”
  • “Generate a CHANGELOG.md for this project based on the git commit history since the last release.”

95% of developers keep ignoring these 5 AI coding superpowers

Many developers are still treating AI like a toy.

Especially the ones still scoffing at the idea of vibe coding and LLMs in general.

They’ll let it spit out some boilerplate or demo code, then go back to slogging through the hard stuff by hand.

They’re still stuck with the 2015 coding mindset.

Yet AI is already capable of shaving hours off the kinds of tasks that quietly eat your time every day.

The real edge comes when you stop thinking of it as a novelty and start using it as a persistent weapon in your workflow.

Developers who figure that out will move a lot faster than the rest.

These are 5 powerful ways to start using AI and coding agents to their maximum potential.

1. Write regex without the headaches

Regex is powerful but writing it by hand can be so painful. AI can:

  • Translate plain-English rules into a working regex.
  • Explain cryptic existing patterns.
  • Generate positive and negative test strings so you can double-check correctness.

Example prompt:

“Write a function in a new file with a regex that matches ISO-8601 timestamps ending in Z, and show me 5 valid and 5 invalid examples.”

You can still verify the AI’s regex in a tool like regex101 to confirm it works across the engine you’re using.
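A plausible result of that prompt, in Python (note this pattern checks shape only, not calendar validity — another reason the regex-tester pass is worth it):

```python
import re

# Matches ISO-8601 UTC timestamps like 2025-07-01T12:34:56Z,
# with optional fractional seconds (e.g. .999).
ISO_8601_Z = re.compile(r"^\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}(?:\.\d+)?Z$")

def is_iso8601_z(value: str) -> bool:
    """Return True if value looks like an ISO-8601 timestamp ending in Z."""
    return ISO_8601_Z.match(value) is not None
```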

2. Easily test APIs with curl and beyond

When you’re debugging APIs, writing curl commands with the right flags can be tedious.

Even with Postman you still have to click here and there, entering the parameters, organizing and creating folders… (ugh)

But with coding agents you can:

  • Turn a plain description into a ready-to-run curl.
  • Translate curl into client code in Python, Node, Go, etc.
  • Add flags for retries, headers, or timing diagnostics.

Example prompt:

“Give me a curl command to POST JSON to /users, set an Authorization header, and print both response headers and timing stats.”

From there, you can ask the AI to convert that curl into production-ready code.
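As a sketch of that curl-to-client-code step, here’s roughly what a Python standard-library version of the POST might look like (the URL and token are placeholders):

```python
import json
import urllib.request

def build_post(url: str, payload: dict, token: str) -> urllib.request.Request:
    """Build a POST request with a JSON body and an Authorization header,
    roughly equivalent to:
    curl -X POST -H "Authorization: Bearer <token>" -d '<json>' <url>"""
    data = json.dumps(payload).encode("utf-8")
    return urllib.request.Request(
        url,
        data=data,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {token}",
        },
        method="POST",
    )

req = build_post("https://api.example.com/users", {"name": "Ada"}, "TOKEN")
# urllib.request.urlopen(req) would actually send it; wrap the call in
# time.perf_counter() if you want the timing stats curl would give you.
```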

3. Rapidly scaffold apps and webpages

Instead of Googling CLI flags or digging through docs, you can let AI set up project starters for you:

  • Next.js with Tailwind and TypeScript.
  • Vite with React or Vue.
  • Pre-configured routes and components that compile on the first run.

Example prompt:

“Scaffold a Next.js app with TypeScript, ESLint, Tailwind, and create Home, About, and Blog pages with starter code.”

This gets you running instantly so you can focus on building features.

And of course with AI you can go beyond project starters and stock templates — with LLMs and free-form text your options are limitless.

Example prompt:

“Bootstrap a custom web app for me. It should be a Next.js project with TypeScript, Tailwind, and ESLint, it should include:
– A /dashboard route with a responsive sidebar and top nav.
– Authentication stubs (login, signup, logout flow) — using [auth service of your choice]
– A mock API for /todos with create/read/update/delete endpoints.
– Example unit tests (with Jest) for at least one component and one API handler.
– A README that explains setup, usage, and next steps.”

4. Write awesome READMEs and changelogs

AI is excellent at producing the boilerplate structure for docs:

  • README.md with install steps, usage, config, and contributing guidelines.
  • CHANGELOGs that follow “Keep a Changelog” and Semantic Versioning.
  • Quick-start snippets in multiple languages and operating systems.

Example prompt:

“Generate a README for this project with sections for Install, Quick Start, Usage, Config, FAQ, and Contributing.”

Then edit and polish to fit your repo’s voice.

Some repos auto-generate changelogs directly from their commit history for releases — often with messy results, since commits rarely map cleanly to features.

But now with AI you just say something like:

“Using my commit history, generate a CHANGELOG.md entry for version 1.3.0 using the Keep a Changelog format. Group items under Added, Fixed, Changed, and Removed. Write in a clean, professional style.”

5. Quick refactorings and inline edits

Modern AI coding tools like Windsurf, Cursor, or Copilot let you select code and simply say what you want changed:

  • Convert callbacks to async/await.
  • Convert a class to a hook — or to a list of functions.
  • Extract a function or interface.

Very similar to VS Code extensions like JavaScript Booster, which now feel a lot less useful by comparison.

Example prompt (select code first):

“Inline edit: refactor this function to use async/await, add JSDoc types, and keep behavior identical.”

Preview diffs, run tests, and expand the scope only after you’re confident.
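The callbacks-to-async/await bullet, sketched in Python with made-up names, before and after:

```python
import asyncio

# Before: callback style — the caller passes a function to receive the result.
def fetch_user_cb(user_id, callback):
    callback({"id": user_id, "name": "Ada"})

# After: async/await — same behavior, but the result is simply returned
# and the call site reads top-to-bottom.
async def fetch_user(user_id):
    await asyncio.sleep(0)  # stand-in for real async I/O
    return {"id": user_id, "name": "Ada"}

user = asyncio.run(fetch_user(42))
```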

Google’s new AI tool moves us one step closer to the death of the IDE

What if the next generation of developers never opens a code editor?

This is the serious case Google is making with their incredible new natural language coding tool, Opal.

This isn’t just another no-code tool.

This is a gamble on re-thinking the very idea of coding.

Opal is a visual playground where anyone can build AI-powered “mini apps” without writing a single line of code.

It launched under Google Labs in July 2025.

You tell it what you want—“summarize a document and draft an email reply”—and Opal responds with a working, visual flow. Inputs, model calls, outputs—all wired up. You can tweak steps via a graph interface or just… keep chatting.

It’s not an IDE. It’s not even a low-code tool. It’s something stranger:

A conversational, modular AI agent that builds, edits, and is the app.

A big deal

Traditional development tools like IDEs and terminals and frameworks were all built with the same mindset — humans write code to tell computers what to do.

But Opal says:
Humans describe outcomes. The AI figures out the how.

It’s the opposite of what we’ve spent decades optimizing for:

  • No syntax.
  • No debugging.
  • No deployment targets.

Just outcomes.

And it’s not alone. Google’s Jules can already work on your repo autonomously. Their Stitch tool can generate UIs from napkin sketches. Stitch + Jules + Opal = a future where the IDE becomes invisible.

We see something similar, to a lesser but still important extent, in tools like OpenAI Codex.

For the first time in the history of software development, we can make massive, significant changes to our codebase without having to ever open the IDE and touch code.

Opal vs IDEs

Where an IDE assumes:

  • You know the language
  • You own the repo
  • You debug with your brain
  • You ship and maintain your code

Opal assumes:

  • You don’t need to know how anything works
  • You want it running now
  • The AI handles the logic
  • The environment is the product

It’s the Uber of programming: you don’t need to build the car. You just say where you want to go.

Confidence

The tradeoff:

  • Opal is fast but opaque.
  • Code is slow but transparent.

There’s no Git. No static typing. No test suite. You’re trusting the AI to do the right thing—and if it breaks, you might not even know why.

But that’s the point. This isn’t supposed to do what an IDE does.
This is supposed to make you forget you ever needed one.

For this to ever happen, we need an incredibly high level of confidence that the changes being made are exactly what we specify and that every ambiguity is accounted for.

Will code become obsolete?

Not yet. Not for large systems. Not for fine-tuned control.

But Opal shows us a real possibility:

  • That small tools can be spoken into existence.
  • That AI will eat the scaffolding, the glue code, the boring parts.
  • That someday, even real products might be built in layers of AI-on-AI—no React, no Docker, no IDE in sight.

It’s not just no-code. It’s post-code.

Welcome to the brave new world. Google’s already building it.

Google’s new coding agent just got even more insane

Wow. This just got even more insane.

Google has officially taken Jules out of beta and unleashed an absolute coding beast — something many people are calling Jules 2.0.

The new Jules is a huge evolution in what agentic development can look like — combining planning, sophisticated autonomy, and serious engineering maturity in one system.

Google is clearly getting dead serious about developer tooling. No more wishy-washy experiments. No more half-steps. Jules is here to stay.

Your AI teammate, smarter than ever

Jules is still that genius AI coding agent that can understand your intent, plan steps, and execute complex coding tasks — all asynchronously, like a super-smart teammate working in the background.

But with this update Jules is faster, more capable, and more reliable.

It uses Gemini 2.5 Pro for reasoning and planning — serious brainpower for tough coding problems. And with Gemini 2.5 Flash on the horizon, even crazier speeds are coming.

The brilliant features in Jules 2.0

Critic-augmented generation

Before showing you results, Jules now reviews its own work with a built-in critic.

If the critic finds issues, Jules fixes them before you even see the code. Think of it as a safety net built right in.

GitHub issues integration

You can assign Jules tasks straight from GitHub Issues. It turns tickets into working pull requests — automatically.

Reusable setups

Reruns are faster than ever because Jules remembers past contexts, so repeat workflows don’t have to start from scratch.

Multimodal support

Not just code anymore — Jules can handle a wider range of inputs and contexts.

Audio changelogs

Jules can tell you what changed, so you can keep up just by listening.

How Jules works

When you give Jules a task, it clones your repo into a secure Google Cloud VM, isolated from your live code. There, it experiments safely with full context of your project.

It then:

  1. Plans the steps.
  2. Explains its reasoning.
  3. Generates a “diff” of changes.
  4. Opens a pull request on GitHub for you.

You stay in full control — reviewing, approving, or modifying its work before merging.

Jules shines at all the boring but critical coding chores:

  • Writing tests
  • Fixing bugs
  • Adding features
  • Bumping dependencies
  • Updating docs

Availability & plans

Jules is now generally available worldwide. No waitlist. No invite-only beta.

  • Free Plan: 15 tasks/day, 3 concurrent runs.
  • Pro Plan: 100 tasks/day, 15 concurrent runs.
  • Ultra Plan: 300 tasks/day, 60 concurrent runs.

This scaling makes it usable for everyone from hobbyists to full-on teams.

The bigger picture

This isn’t about replacing developers — at least for now.

Google is framing Jules as a force multiplier: freeing you from grunt work so you can focus on architecture, creativity, and real problem-solving.

Jules is less about taking jobs, more about changing the workflow: you offload the boring, repetitive stuff to an AI agent that works in the background, while you design the future.

Final thoughts

Jules was already mind-blowing in beta.

But with these new updates and the general release, Google just pushed the boundaries of agentic development even further.

If you’re a developer this isn’t optional anymore. It’s the start of a new way of building.

Check it out, experiment, and get ready — because coding just changed again.

Why you *need* to be a devpreneur in 2025

Becoming a devpreneur — a developer who turns ideas into real-world solutions — isn’t just a career path. It’s a calling. It’s how you combine your creativity, your values, and your technical skills into something that leaves a mark.

This isn’t about chasing unicorn valuations or “escaping the 9–5.” It’s about building things that matter.

1. The world needs builders, not just commentators

The internet is flooded with opinions, outrage, and takes — but what we truly need are creators who can solve problems, not just talk about them.

As a devpreneur, you’re not waiting for institutions, governments, or corporations to catch up.
You’re saying: “Here’s a problem — I’m going to fix it.”

  • Climate? Build tools for sustainability.
  • Mental health? Create spaces for healing.
  • Education? Reimagine learning accessibly and globally.
  • Democracy? Strengthen participation, truth, and transparency.

Impact doesn’t scale through opinions. It scales through products.

2. You can go from idea → action → change

In 2025, the tools are powerful — but accessible:

  • AI lets you work faster, smarter, and more independently.
  • Open-source projects and APIs are ready to plug into.
  • You can launch globally from your laptop, within days.

This means you can take an insight, build something useful, and ship it to the world — without waiting for funding, permission, or a cofounder.

When you’re a devpreneur, you don’t just think.
You build. You ship. You impact.

3. You become a force of directed intelligence

Technical skills are a form of modern-day superpower.
But most people waste them inside corporate silos, patching bugs, running sprints, and shipping features they don’t believe in.

As a devpreneur:

  • You aim that power toward your own values.
  • You work on problems you understand deeply.
  • You choose meaning over maintenance.

Instead of fueling someone else’s ad engine or algorithm, you’re building tools that empower, educate, connect, or heal.

The world is shaped by those who build it.

4. You lead with values, not just features

Big companies build for scale. Devpreneurs can build for soul.

You can:

  • Build tools that foster community, not just engagement.
  • Optimize for user dignity, not just clicks or conversions.
  • Create tech that heals attention spans, uplifts mental health, or protects privacy.

You can be idealistic and practical. Visionary and functional.

In a world overwhelmed by extractive tech, you offer a different kind of software:
Tech with conscience. Tools with heart.

5. You grow into the person who can do even more

Building and shipping your own tools forces you to grow:

  • As a thinker, because you must simplify what matters.
  • As a designer, because you must make ideas usable.
  • As a communicator, because you must inspire and educate.
  • As a human, because you’ll meet resistance, and overcome it.

Being a devpreneur is an act of self-evolution — the person you become in the process is more capable of shaping the future than the one who started.

6. The time is now — and the stakes are high

2025 is not a neutral year. The world faces crises and opportunities on a planetary scale. From climate change to digital addiction, from misinformation to loneliness — the fabric of society is shifting.

That means your skills aren’t neutral either.

You can use them to build more ad tech, more clickbait, more distraction.
Or you can build solutions, movements, and futures that matter.

And the good news?
You don’t need permission.
You just need purpose — and the courage to start.

Build the tool. Ship the idea. Create the ripple.

The world is waiting.

This IDE just got a massive AI upgrade

This is incredible.

Windsurf just pushed several amazing upgrades to their IDE with their new Wave 12 update.

They recently got acquired by the company behind Devin — the incredible coding agent that called itself “the first AI software engineer.”

And now with Wave 12, they’ve brought several of the features that made Devin so powerful and popular to Windsurf.

These features will help us build much faster, making a greater impact on our codebases while focusing on what matters.

Less time spent on repetitive, mundane, low-level changes, and more on high-level thinking and design.

Incredible new “DeepWiki”

This amazing new feature helps you understand unfamiliar parts of a codebase much more easily and quickly.

Ever hovered over a function and thought, what the hell does this thing actually do?

Now when you Cmd/Ctrl + Shift + Click on any symbol, Windsurf brings up DeepWiki — an AI-generated breakdown of what that symbol is, how it fits into the bigger picture, and even a summary of its behavior.

But it doesn’t end there: you can then feed that explanation directly into Cascade, so the agent understands your code at the same level you now do. It’s like giving your copilot context without typing a word.

Everything is connected.

It will make it much easier to work on a new codebase and collaborate with others.

Vibe & replace: find-and-replace on steroids

A new intelligent feature to make effortless changes to various parts of your codebase.

This one’s for when you need to rename a method, tweak an API call, or apply a pattern across your whole codebase — but you don’t want to end up breaking everything.

With Vibe & Replace you search for a term, like fetchUser, and then write a prompt describing how you want each instance updated — for example “rename to getUserById and update its arguments”.

Windsurf handles the heavy lifting, match by match. You pick between:

  • Smart mode — more thoughtful, safer.
  • Fast mode — quicker, more aggressive.

Either way, it’s regex with superpowers.

Easily make tweaks here and there and stay focused on the big picture.
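
For contrast, here’s what a plain regex rename looks like with sed. It can swap the name everywhere, but it can’t rewrite each call site’s arguments the way a prompt-driven pass can (fetchUser and getUserById are just the illustrative names from above):

```shell
# Plain-regex rename with sed: swaps the name everywhere,
# but cannot intelligently update each call site's arguments.
printf 'const u = fetchUser(name);\n' > example.js
sed -i.bak 's/fetchUser/getUserById/g' example.js
cat example.js   # const u = getUserById(name);
```

That gap — updating surrounding arguments and logic, not just the matched text — is exactly what the prompt in Vibe & Replace is for.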

Cascade upgrades: less clicking, more thinking

Cascade — Windsurf’s AI agent workspace — now has auto-planning baked in. No more manually toggling between modes. You give it a goal, it figures out a plan, and you review or edit that plan before it touches your code.

It also got better at working with long contexts, and the tools are snappier and more precise.

Dev containers + remote SSH

An invaluable new feature for working in containers and on remote servers.

Wave 12 adds support for Dev Containers over SSH. That means you can open a repo hosted on a remote machine inside a dev container and still use Windsurf like normal.

It basically brings the “works on my machine” experience to any machine.
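
If you’re new to dev containers, the setup lives in a devcontainer.json at the repo root. A minimal sketch might look like this (the image is an arbitrary common base, not something Windsurf requires):

```shell
# Sketch of a minimal dev container config; the image is an
# example base image, not a Windsurf requirement.
mkdir -p .devcontainer
cat > .devcontainer/devcontainer.json <<'EOF'
{
  "name": "my-project",
  "image": "mcr.microsoft.com/devcontainers/base:ubuntu"
}
EOF
```

With that file in a remote repo, "Reopen in Container" over SSH should pick it up like any local project.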

Smarter tab autocomplete

Tab completion now feels way more alive. The suggestions are faster and smarter, especially when you’re working across multiple files. If you weren’t using it before, you’ll probably start now.

So much more

You’ll notice the new look right away — a cleaner, more focused UI across chat, Cascade, and the home panels.

Under the hood, they packed in over 100 fixes and performance upgrades. Stuff feels smoother and snappier all around.

Quickstart — try these:

  • DeepWiki: Hover any symbol → Cmd/Ctrl + Shift + Click → read, then click “Add to Cascade” to give it context.
  • Vibe & Replace: Search for something like foo(), then prompt it with “replace with bar() and update its args.”
  • Dev Containers: If you work remote, try “Reopen in Container” via the command palette — now works over SSH too.

With every Wave, Windsurf moves closer to being a real engineering assistant that lets us build far more than we’ve ever been capable of.

Wave 12 isn’t just another version bump. It’s a real shift toward a more intelligent, less frustrating coding experience.

This major IDE just got an amazing new coding CLI

Wow this is amazing.

Cursor just launched a powerful new CLI tool that brings its coding agent directly to your terminal — and you can use it in ANY IDE.

Bring AI-powered assistance into any environment: shells, JetBrains IDEs, containers, CI pipelines.

This isn’t just a helper to launch the Cursor editor from the terminal like the old cursor command.

It’s a full-fledged, headless agent that can read and edit files, understand multi-file context, and even run shell commands—with your approval.

Interactive and print modes

The CLI operates in two distinct modes:

Interactive mode is like chatting with the agent in real time. You give it a goal, and it responds by showing code changes or proposing terminal commands. You can review each step, approve changes, and iterate—all from the terminal.

Print mode (non-interactive) is designed for automation.

It lets you run single-shot prompts and return output in text, json, or stream-json formats—perfect for scripts, pipelines, or CI jobs that need structured results.

You can switch modes using flags like -p and --output-format.
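
Under those assumptions, a print-mode call might look like the sketch below (the prompt text is illustrative; -p and --output-format are the flags mentioned above):

```shell
# One-shot, non-interactive run that emits structured JSON,
# suitable for a script or CI step (prompt text is illustrative).
cursor-agent -p "list every TODO comment in src/" --output-format json
```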

Session control and customization

The CLI isn’t just one-shot.

You can resume previous sessions with cursor-agent resume, list old chats with cursor-agent ls, and keep long-term context across sessions.

It also supports slash commands like /model, /resume, and /quit, and you can specify which model to use (like the new gpt-5) with the -m flag. This matches the model flexibility from the GUI version of Cursor.
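
Putting those session commands together, a typical flow might look like this sketch (exact behavior may differ while the CLI is in beta):

```shell
# List previous chats, resume one, then start a new run on a chosen model.
cursor-agent ls
cursor-agent resume
cursor-agent -m gpt-5 "summarize what changed in the last session"
```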

Secure file and command access

In interactive mode, the agent can propose terminal commands—but you must approve them before execution. This gives you peace of mind and full control. In print mode (used in CI), the agent has write access, so it’s recommended only for trusted environments.

Authentication can be done through a browser flow (cursor-agent login) or with an API key (CURSOR_API_KEY) for headless usage in CI.
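
For headless CI use, that might look like the sketch below (the secret name and prompt are placeholders):

```shell
# Headless auth for CI: the API key comes from the environment,
# so no browser flow is needed (CI_CURSOR_KEY is a placeholder secret).
export CURSOR_API_KEY="$CI_CURSOR_KEY"
cursor-agent -p "review this diff for obvious bugs" --output-format text
```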

MCP support. Of course

The CLI integrates smoothly with Cursor’s project rules, including .cursor/rules, AGENTS.md, and mcp.json if you’re using the Model Context Protocol (MCP).

This lets you define tool access, coding guidelines, and resources just once and use them across both the GUI and CLI.

This also means you can define workflows and plug your agent into real APIs, databases, or file systems—enabling powerful, real-world automation.
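
If you haven’t set up MCP before, a project-level mcp.json is essentially a list of servers the agent is allowed to launch. Here’s a minimal sketch (the "my-db" server name and the npm package are made-up placeholders, not real servers):

```shell
# Sketch of a project-level MCP config; "my-db" and the npm package
# are hypothetical placeholders for illustration only.
mkdir -p .cursor
cat > .cursor/mcp.json <<'EOF'
{
  "mcpServers": {
    "my-db": {
      "command": "npx",
      "args": ["-y", "@example/db-mcp-server"]
    }
  }
}
EOF
```

Once defined, both the GUI and the CLI read the same config, so the agent gets identical tool access everywhere.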

Easy installation and setup

Installation is a one-liner:

Shell
curl https://cursor.com/install -fsS | bash
cursor-agent login
# Start CLI with a prompt
cursor-agent chat "find one bug and fix it"

From there, you can start prompting, refactoring code, or reviewing pull requests—straight from your terminal.

Cursor’s new CLI turns its agent into a portable, flexible coding assistant that you can drop into any part of your workflow.

Whether you’re working in a minimalist terminal setup, building automation in CI, or pairing it with your favorite editor, this CLI opens the door to powerful, context-aware, headless development.

It’s currently in beta, so expect ongoing improvements—but if you want to bring intelligent, GPT-5-level coding into your daily terminal flow, this is the tool to try.

5 powerful MCP servers to make the most of GPT-5

GPT-5 is out now and AI agents are smarter than ever — but real power comes when you connect them to real-world tools.

That’s where the Model Context Protocol (MCP) comes in. It lets GPT-5 talk to external services like your databases and repos — not just to understand them, but to use them.

Whether you’re building your own agent or using GPT-5 through tools like Claude, Cursor, or Copilot, these 5 ready-to-use MCP servers instantly upgrade your AI’s capabilities.

1. Firebase MCP server

If you’re already using Firebase, this server lets GPT-5 interact with your project just like you would from the CLI. It’s perfect for app automation, debugging, and user management — all from natural language.

Key features

  • Instantly bootstrap Firebase projects and apps
  • Manage Auth: list users, set custom claims, enable/disable accounts
  • Query Firestore and validate security rules
  • Send push notifications via FCM
  • Supports Data Connect (GraphQL), Remote Config, and Crashlytics

Powerful use cases

  • GPT-5 can spin up a full Firebase app with Firestore, Auth, and hosting
  • Automatically summarize recent Auth activity and flag risky behavior
  • Simulate rule access and debug failed reads/writes
  • Analyze crash logs and propose fixes directly from Crashlytics data

Get Firebase MCP Server: LINK

2. GitHub MCP Server

This official GitHub server is built to give agents full visibility into your repos. It’s great for AI-assisted issue triage, PR generation, and CI/CD diagnostics.

Key features

  • Local or hosted (OAuth) mode
  • Granular control: enable only repos, issues, pull requests, etc.
  • Read-only toggle for safe use
  • Supports Dependabot, CodeQL, GitHub Actions workflows
  • Plays well with Claude, Cursor, and other tools

Powerful use cases

  • GPT-5 can scan issues across multiple repos and suggest the highest-impact fixes
  • Generate summaries of PR conversations and flag unresolved feedback
  • Automatically file issues for flaky tests or broken workflows
  • Navigate GitHub Actions runs to debug CI/CD failures

Get GitHub MCP Server: LINK

3. Notion MCP

This hosted server lets GPT-5 act on your Notion workspace — fetching, editing, creating, and organizing content across pages and databases.

Key features

  • One-click OAuth setup for ChatGPT, Claude, Cursor, etc.
  • Unified search across Notion, Slack, Google Drive, and Jira
  • Page and database read/write support
  • Supports Streamable HTTP or stdio (via mcp-remote)
  • Great for knowledge management and workflows

Powerful use cases

  • GPT-5 can summarize meeting notes, create task trackers, or plan product launches
  • Automatically organize messy pages into structured databases
  • Generate content drafts based on workspace context
  • Answer questions using data from across Notion, Slack, and Google Docs

Get Notion MCP: LINK

4. DevDb MCP

DevDb turns your local databases into something GPT-5 can explore and reason about — without needing manual SQL writing or schema diving.

Key features

  • MCP server built into DevDb VS Code extension — also works for Cursor and Windsurf
  • Auto-detects common frameworks like Laravel, Django, and Rails
  • Supports Postgres, MySQL, SQLite, and SQL Server
  • One-click JSON config for agent setup
  • GUI tools for table editing and inspection

Powerful use cases

  • GPT-5 can inspect schema and generate queries from natural language
  • Document relationships and suggest schema changes
  • Auto-generate database migrations or seed data
  • Explore foreign key chains and infer business logic

Get DevDb MCP Server: LINK

5. Sentry MCP

This server connects GPT-5 to real-time error and performance data from Sentry — and lets it go beyond monitoring into automated debugging and even code generation.

Key features

  • OAuth setup with hosted Streamable HTTP or SSE fallback
  • Explore issues, events, traces, and performance regressions
  • View project/org metadata and DSNs
  • Integrated Seer tool for root-cause analysis and autofix
  • Supports local self-hosted Sentry too

Powerful use cases

  • GPT-5 can analyze top crashes, trace them to root causes, and suggest fixes
  • Automatically file detailed bug reports linked to Sentry errors
  • Monitor app performance and alert when KPIs regress
  • Use Seer to run auto-diagnosis and push PRs

GPT-5 is out in the wild now and these MCP servers take its raw intelligence and plug it into real-world workflows.

From spinning up Firebase apps to analyzing crashes in Sentry to navigating your entire Notion workspace — this is the future of AI: not just being intelligent, but making things happen in the real world, at scale.