Tari Ibaba

Tari Ibaba is a software developer with years of experience building websites and apps. He has written extensively on a wide range of programming topics and has created dozens of apps and open-source libraries.

95% of developers keep ignoring these 5 AI coding superpowers

Many developers are still treating AI like a toy.

Especially the ones still scoffing at the idea of vibe coding and LLMs in general.

They’ll let it spit out some boilerplate or demo code, then go back to slogging through the hard stuff by hand.

They’re still stuck with the 2015 coding mindset.

Yet AI is already capable of shaving hours off the kinds of tasks that quietly eat your time every day.

The real edge comes when you stop thinking of it as a novelty and start using it as a persistent weapon in your workflow.

Developers who figure that out will move a lot faster than the rest.

These are 5 powerful ways to start using AI and coding agents to their maximum potential.

1. Write regex without the headaches

Regex is powerful, but writing it by hand can be so painful. AI can:

  • Translate plain-English rules into a working regex.
  • Explain cryptic existing patterns.
  • Generate positive and negative test strings so you can double-check correctness.

Example prompt:

“Write a function in a new file with a regex that matches ISO-8601 timestamps ending in Z, and show me 5 valid and 5 invalid examples.”

You can still verify the AI’s regex in a tool like regex101 to confirm it works with the engine you’re using.
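For example, here’s a minimal sketch of what that function might look like in TypeScript, assuming a simple pattern that only checks the overall shape of the timestamp (it doesn’t validate month or day ranges):

TypeScript
// A regex for ISO-8601 timestamps ending in Z, e.g. 2025-07-01T12:30:45Z
// (optional fractional seconds). It checks the shape only; it doesn't
// validate month/day ranges, so verify it in regex101 as suggested above.
const ISO_8601_Z = /^\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}(\.\d+)?Z$/;

export function isIsoUtcTimestamp(value: string): boolean {
  return ISO_8601_Z.test(value);
}

// Positive and negative test strings, as the prompt asks for
["2025-07-01T12:30:45Z", "1999-12-31T23:59:59.999Z"].forEach((s) =>
  console.assert(isIsoUtcTimestamp(s), `expected valid: ${s}`)
);
["2025-07-01 12:30:45Z", "2025-07-01T12:30:45+02:00", "not-a-date"].forEach((s) =>
  console.assert(!isIsoUtcTimestamp(s), `expected invalid: ${s}`)
);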

2. Easily test APIs with curl and beyond

When you’re debugging APIs, writing curl commands with the right flags can be tedious.

Even with Postman you still have to click here and there, enter the parameters, create and organize folders… (ugh)

But with coding agents you can:

  • Turn a plain description into a ready-to-run curl.
  • Translate curl into client code in Python, Node, Go, etc.
  • Add flags for retries, headers, or timing diagnostics.

Example prompt:

“Give me a curl command to POST JSON to /users, set an Authorization header, and print both response headers and timing stats.”

From there, you can ask the AI to convert that curl into production-ready code.
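For instance, here’s a minimal sketch of what that converted client code could look like in TypeScript (Node 18+ with the built-in fetch; the base URL, token, and request body are hypothetical):

TypeScript
// Roughly the curl from the prompt above as client code: POST JSON to /users
// with an Authorization header, then print response headers and timing.
async function createUser(baseUrl: string, token: string) {
  const start = performance.now();
  const res = await fetch(`${baseUrl}/users`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${token}`,
    },
    body: JSON.stringify({ name: "Ada Lovelace", email: "ada@example.com" }),
  });
  const elapsedMs = performance.now() - start;

  // The equivalent of curl's -i (headers) and -w (timing) output
  res.headers.forEach((value, key) => console.log(`${key}: ${value}`));
  console.log(`status: ${res.status}, time: ${elapsedMs.toFixed(1)} ms`);

  return res.json();
}

From there you can layer on retries or timeouts, much like the extra curl flags mentioned above.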

3. Rapidly scaffold apps and webpages

Instead of Googling CLI flags or digging through docs, you can let AI set up project starters for you:

  • Next.js with Tailwind and TypeScript.
  • Vite with React or Vue.
  • Pre-configured routes and components that compile on the first run.

Example prompt:

“Scaffold a Next.js app with TypeScript, ESLint, Tailwind, and create Home, About, and Blog pages with starter code.”

This gets you running instantly so you can focus on building features.

And of course with AI you can go beyond project starters and stock templates — with LLMs and free-form text your options are limitless.

Example prompt:

“Bootstrap a custom web app for me. It should be a Next.js project with TypeScript, Tailwind, and ESLint, and it should include:
– A /dashboard route with a responsive sidebar and top nav.
– Authentication stubs (login, signup, logout flow) — using [auth service of your choice]
– A mock API for /todos with create/read/update/delete endpoints.
– Example unit tests (with Jest) for at least one component and one API handler.
– A README that explains setup, usage, and next steps.”
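To give a feel for what that scaffold could contain, here’s a minimal sketch of the mock /todos API in Next.js App Router style. The file path, the Todo shape, and the in-memory store are all illustrative assumptions, not necessarily what the AI will produce:

TypeScript
// app/api/todos/route.ts: mock create/read endpoints backed by an in-memory
// array (update/delete would live in app/api/todos/[id]/route.ts).
import { NextResponse } from "next/server";

type Todo = { id: number; title: string; done: boolean };

const todos: Todo[] = [{ id: 1, title: "Try the scaffold", done: false }];

export async function GET() {
  return NextResponse.json(todos);
}

export async function POST(request: Request) {
  const { title } = await request.json();
  const todo: Todo = { id: Date.now(), title, done: false };
  todos.push(todo);
  return NextResponse.json(todo, { status: 201 });
}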

4. Write awesome READMEs and changelogs

AI is excellent at producing the boilerplate structure for docs:

  • README.md with install steps, usage, config, and contributing guidelines.
  • CHANGELOGs that follow “Keep a Changelog” and Semantic Versioning.
  • Quick-start snippets in multiple languages and operating systems.

Example prompt:

“Generate a README for this project with sections for Install, Quick Start, Usage, Config, FAQ, and Contributing.”

Then edit and polish to fit your repo’s voice.

In some repos I see changelogs generated automatically from the commit history for each release — often problematic, since commits rarely map cleanly to features.

But now with AI you just say something like:

“Using my commit history, generate a CHANGELOG.md entry for version 1.3.0 using the Keep a Changelog format. Group items under Added, Fixed, Changed, and Removed. Write in a clean, professional style.”

5. Quick refactorings and inline edits

Modern AI coding tools like Windsurf, Cursor, or Copilot let you select code and simply say what you want changed:

  • Convert callbacks to async/await.
  • Convert a class to a hook — or a list of functions.
  • Extract a function or interface.

It’s similar to VS Code extensions like JavaScript Booster, which now feel a lot less useful by comparison.

Example prompt (select code first):

“Inline edit: refactor this function to use async/await, add JSDoc types, and keep behavior identical.”

Preview diffs, run tests, and expand the scope only after you’re confident.
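To make that concrete, here’s a sketch of the kind of before/after the prompt is asking for. The loadConfig example is made up, and since it’s TypeScript the type annotations stand in for the JSDoc the prompt mentions:

TypeScript
// Before: the callback-style code you'd select for the inline edit
import { readFile } from "node:fs";

function loadConfigCallback(
  path: string,
  cb: (err: Error | null, config?: unknown) => void
) {
  readFile(path, "utf8", (err, data) => {
    if (err) return cb(err);
    try {
      cb(null, JSON.parse(data));
    } catch (parseErr) {
      cb(parseErr as Error);
    }
  });
}

// After: the async/await version with identical behavior
// (read the file, parse the JSON, surface any error to the caller).
import { readFile as readFileAsync } from "node:fs/promises";

async function loadConfig(path: string): Promise<unknown> {
  const data = await readFileAsync(path, "utf8");
  return JSON.parse(data);
}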

Google’s new AI tool moves us one step closer to the death of the IDE

What if the next generation of developers never opens a code editor?

This is the serious case Google is making with their incredible new natural language coding tool, Opal.

This isn’t just another no-code tool.

This is a gamble on re-thinking the very idea of coding.

Opal is a visual playground where anyone can build AI-powered “mini apps” without writing a single line of code.

It launched under Google Labs in July 2025.

You tell it what you want—“summarize a document and draft an email reply”—and Opal responds with a working, visual flow. Inputs, model calls, outputs—all wired up. You can tweak steps via a graph interface or just… keep chatting.

It’s not an IDE. It’s not even a low-code tool. It’s something stranger:

A conversational, modular AI agent that builds, edits, and is the app.

A big deal

Traditional development tools like IDEs, terminals, and frameworks were all built with the same mindset — humans write code to tell computers what to do.

But Opal says:
Humans describe outcomes. The AI figures out the how.

It’s the opposite of what we’ve spent decades optimizing for:

  • No syntax.
  • No debugging.
  • No deployment targets.

Just outcomes.

And it’s not alone. Google’s Jules can already work on your repo autonomously. Their Stitch tool can generate UIs from napkin sketches. Stitch + Jules + Opal = a future where the IDE becomes invisible.

We see something similar, to a lesser but still important extent, in tools like OpenAI Codex.

For the first time in the history of software development, we can make massive, significant changes to our codebase without ever having to open the IDE and touch code.

Opal vs IDEs

Where an IDE assumes:

  • You know the language
  • You own the repo
  • You debug with your brain
  • You ship and maintain your code

Opal assumes:

  • You don’t need to know how anything works
  • You want it running now
  • The AI handles the logic
  • The environment is the product

It’s the Uber of programming: you don’t need to build the car. You just say where you want to go.

Confidence

The tradeoff:

  • Opal is fast but opaque.
  • Code is slow but transparent.

There’s no Git. No static typing. No test suite. You’re trusting the AI to do the right thing—and if it breaks, you might not even know why.

But that’s the point. This isn’t supposed to do what an IDE does.
This is supposed to make you forget you ever needed one.

For this to ever happen, we need an incredibly high level of confidence that the changes being made are exactly what we specified and that every ambiguity is accounted for.

Will code become obsolete?

Not yet. Not for large systems. Not for fine-tuned control.

But Opal shows us a real possibility:

  • That small tools can be spoken into existence.
  • That AI will eat the scaffolding, the glue code, the boring parts.
  • That someday, even real products might be built in layers of AI-on-AI—no React, no Docker, no IDE in sight.

It’s not just no-code. It’s post-code.

Welcome to the brave new world. Google’s already building it.

Google’s new coding agent just got even more insane

Wow. This just got even more insane.

Google has officially taken Jules out of beta and unleashed an absolute coding beast — something many people are calling Jules 2.0.

The new Jules is a huge evolution in what agentic development can look like — combining planning, sophisticated autonomy, and serious engineering maturity in one system.

Google is clearly getting dead serious about developer tooling. No more wishy-washy experiments. No more half-steps. Jules is here to stay.

Your AI teammate, smarter than ever

Jules is still that genius AI coding agent that can understand your intent, plan steps, and execute complex coding tasks — all asynchronously, like a super-smart teammate working in the background.

But with this update Jules is faster, more capable, and more reliable.

It uses Gemini 2.5 Pro for reasoning and planning — serious brainpower for tough coding problems. And with Gemini 2.5 Flash on the horizon, even crazier speeds are coming.

The brilliant features in Jules 2.0

Critic-augmented generation

Before showing you results, Jules now reviews its own work with a built-in critic.

If the critic finds issues, Jules fixes them before you even see the code. Think of it as a safety net built right in.

GitHub issues integration

You can assign Jules tasks straight from GitHub Issues. It turns tickets into working pull requests — automatically.

Reusable setups

Reruns are faster than ever because Jules remembers past contexts, so repeat workflows don’t have to start from scratch.

Multimodal support

Not just code anymore — Jules can handle a wider range of inputs and contexts.

Audio changelogs

Jules can tell you what changed, so you can keep up just by listening.

How Jules works

When you give Jules a task, it clones your repo into a secure Google Cloud VM, isolated from your live code. There, it experiments safely with full context of your project.

It then:

  1. Plans the steps.
  2. Explains its reasoning.
  3. Generates a “diff” of changes.
  4. Opens a pull request on GitHub for you.

You stay in full control — reviewing, approving, or modifying its work before merging.

Jules shines at all the boring but critical coding chores:

  • Writing tests
  • Fixing bugs
  • Adding features
  • Bumping dependencies
  • Updating docs

Availability & plans

Jules is now generally available worldwide. No waitlist. No invite-only beta.

  • Free Plan: 15 tasks/day, 3 concurrent runs.
  • Pro Plan: 100 tasks/day, 15 concurrent runs.
  • Ultra Plan: 300 tasks/day, 60 concurrent runs.

This scaling makes it usable for everyone from hobbyists to full-on teams.

The bigger picture

This isn’t about replacing developers — at least for now.

Google is framing Jules as a force multiplier: freeing you from grunt work so you can focus on architecture, creativity, and real problem-solving.

Jules is less about taking jobs, more about changing the workflow: you offload the boring, repetitive stuff to an AI agent that works in the background, while you design the future.

Final thoughts

Jules was already mind-blowing in beta.

But with these new updates and the general release, Google just moved the conversation forward to push the boundaries of agentic development.

If you’re a developer, this isn’t optional anymore. It’s the start of a new way of building.

Check it out, experiment, and get ready — because coding just changed again.

Why you *need* to be a devpreneur in 2025

Becoming a devpreneur — a developer who turns ideas into real-world solutions — isn’t just a career path. It’s a calling. It’s how you combine your creativity, your values, and your technical skills into something that leaves a mark.

This isn’t about chasing unicorn valuations or “escaping the 9–5.” It’s about building things that matter.

1. The world needs builders, not just commentators

The internet is flooded with opinions, outrage, and takes — but what we truly need are creators who can solve problems, not just talk about them.

As a devpreneur, you’re not waiting for institutions, governments, or corporations to catch up.
You’re saying: “Here’s a problem — I’m going to fix it.”

  • Climate? Build tools for sustainability.
  • Mental health? Create spaces for healing.
  • Education? Reimagine learning accessibly and globally.
  • Democracy? Strengthen participation, truth, and transparency.

Impact doesn’t scale through opinions. It scales through products.

2. You can go from idea → action → change

In 2025, the tools are powerful — but accessible:

  • AI lets you work faster, smarter, and more independently.
  • Open-source projects and APIs are ready to plug into.
  • You can launch globally from your laptop, within days.

This means you can take an insight, build something useful, and ship it to the world — without waiting for funding, permission, or a cofounder.

When you’re a devpreneur, you don’t just think.
You build. You ship. You impact.

3. You become a force of directed intelligence

Technical skills are a form of modern-day superpower.
But most people waste them inside corporate silos, patching bugs, running sprints, and shipping features they don’t believe in.

As a devpreneur:

  • You aim that power toward your own values.
  • You work on problems you understand deeply.
  • You choose meaning over maintenance.

Instead of fueling someone else’s ad engine or algorithm, you’re building tools that empower, educate, connect, or heal.

The world is shaped by those who build it.

4. You lead with values, not just features

Big companies build for scale. Devpreneurs can build for soul.

You can:

  • Build tools that foster community, not just engagement.
  • Optimize for user dignity, not just clicks or conversions.
  • Create tech that heals attention spans, uplifts mental health, or protects privacy.

You can be idealistic and practical. Visionary and functional.

In a world overwhelmed by extractive tech, you offer a different kind of software:
Tech with conscience. Tools with heart.

5. You grow into the person who can do even more

Building and shipping your own tools forces you to grow:

  • As a thinker, because you must simplify what matters.
  • As a designer, because you must make ideas usable.
  • As a communicator, because you must inspire and educate.
  • As a human, because you’ll meet resistance, and overcome it.

Being a devpreneur is an act of self-evolution — the person you become in the process is more capable of shaping the future than the one who started.

6. The time is now — and the stakes are high

2025 is not a neutral year. The world faces crises and opportunities on a planetary scale. From climate change to digital addiction, from misinformation to loneliness — the fabric of society is shifting.

That means your skills aren’t neutral either.

You can use them to build more ad tech, more clickbait, more distraction.
Or you can build solutions, movements, and futures that matter.

And the good news?
You don’t need permission.
You just need purpose — and the courage to start.

Build the tool. Ship the idea. Create the ripple.

The world is waiting.

This IDE just got a massive AI upgrade

This is incredible.

Windsurf just pushed several amazing upgrades to their IDE with their new Wave 12 update.

They recently got acquired by the company behind Devin — the incredible coding agent that called itself “the first AI software engineer.”

And now with Wave 12, they’ve brought several of the features that made Devin so powerful and popular over to Windsurf.

These features will help us build much faster, make a greater impact on our codebases, and focus on what matters.

Less time spent on repetitive, mundane, low-level changes, and more on high-level thinking and design.

Incredible new “DeepWiki”

This amazing new feature helps you understand unfamiliar parts of a codebase much more easily and quickly.

Ever hovered over a function and thought, what the hell does this thing actually do?

Now when you Cmd/Ctrl + Shift + Click on any symbol, Windsurf brings up DeepWiki — an AI-generated breakdown of what that symbol is, how it fits into the bigger picture, and even a summary of its behavior.

But it doesn’t end there: you can then feed that explanation directly into Cascade, so the agent understands your code at the same level you now do. It’s like giving your copilot context without typing a word.

Everything is connected.

It will make it much easier to work on a new codebase and collaborate with others.

Vibe & replace: find-and-replace on steroids

A new intelligent feature to make effortless changes to various parts of your codebase.

This one’s for when you need to rename a method, tweak an API call, or apply a pattern across your whole codebase — but you don’t want to end up breaking everything.

With Vibe & Replace you search for a term, like fetchUser, and then write a prompt describing how you want each instance updated — for example “rename to getUserById and update its arguments”.

Windsurf handles the heavy lifting, match by match. You pick between:

  • Smart mode — more thoughtful, safer.
  • Fast mode — quicker, more aggressive.

Either way it’s regex with superpowers.

Easily make tweaks here and there and stay focused on the big picture.

Cascade upgrades: less clicking, more thinking

Cascade — Windsurf’s AI agent workspace — now has auto-planning baked in. No more manually toggling between modes. You give it a goal, it figures out a plan, and you review or edit that plan before it touches your code.

It also got better at working with long contexts, and the tools are snappier and more precise.

Dev containers + remote SSH

An invaluable new feature for working in containers and on remote servers.

Wave 12 adds support for Dev Containers over SSH. That means you can open a repo hosted on a remote machine inside a dev container and still use Windsurf like normal.

It basically brings the “works on my machine” experience to any machine.

Smarter tab autocomplete

Tab completion now feels way more alive. The suggestions are faster and smarter, especially when you’re working across multiple files. If you weren’t using it before, you’ll probably start now.

So much more

You’ll notice the new look right away — a cleaner, more focused UI across chat, Cascade, and the home panels.

Under the hood, they packed in over 100 fixes and performance upgrades. Stuff feels smoother and snappier all around.

Quickstart — try these:

  • DeepWiki: Hover any symbol → Cmd/Ctrl + Shift + Click → read, then click “Add to Cascade” to give it context.
  • Vibe & Replace: Search for something like foo(), then prompt it with “replace with bar() and update its args.”
  • Dev Containers: If you work remote, try “Reopen in Container” via the command palette — now works over SSH too.

With every Wave, Windsurf moves closer to being a real engineering assistant that lets us build much more than we’ve ever been capable of.

Wave 12 isn’t just another version bump. It’s a real shift toward a more intelligent, less frustrating coding experience.

This major IDE just got an amazing new coding CLI

Wow this is amazing.

Cursor just launched a powerful new CLI tool that brings its coding agent directly to your terminal — use it with ANY IDE.

Get AI-powered assistance in any environment: shells, JetBrains IDEs, containers, CI pipelines.

This isn’t just a helper to launch the Cursor editor from the terminal like the old cursor command.

It’s a full-fledged, headless agent that can read and edit files, understand multi-file context, and even run shell commands—with your approval.

Interactive and print

The CLI operates in two distinct modes:

Interactive mode is like chatting with the agent in real time. You give it a goal, and it responds by showing code changes or proposing terminal commands. You can review each step, approve changes, and iterate—all from the terminal.

Print mode (non-interactive) is designed for automation.

It lets you run single-shot prompts and return output in text, json, or stream-json formats—perfect for scripts, pipelines, or CI jobs that need structured results.

You can switch modes using flags like -p and --output-format.
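As a rough sketch of that automation angle, you could call print mode from a Node/TypeScript script along these lines. The -p and --output-format flags are the ones mentioned above, but the exact invocation shape and the structure of the JSON output are assumptions:

TypeScript
// Run a single-shot prompt in print mode and parse the structured output,
// e.g. as one step in a CI job. Assumes cursor-agent is installed and
// authenticated (for CI, via CURSOR_API_KEY as described below).
import { execFile } from "node:child_process";
import { promisify } from "node:util";

const run = promisify(execFile);

async function runAgent(prompt: string): Promise<unknown> {
  const { stdout } = await run("cursor-agent", [
    "-p",
    prompt,
    "--output-format",
    "json",
  ]);
  return JSON.parse(stdout);
}

runAgent("summarize the risky changes in this branch").then((result) =>
  console.log(result)
);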

Session control and customization

The CLI isn’t just one-shot.

You can resume previous sessions with cursor-agent resume, list old chats with cursor-agent ls, and keep long-term context across sessions.

It also supports slash commands like /model, /resume, and /quit, and you can specify which model to use (like the new gpt-5) with the -m flag. This matches the model flexibility from the GUI version of Cursor.

Secure file and command access

In interactive mode, the agent can propose terminal commands—but you must approve them before execution. This gives you peace of mind and full control. In print mode (used in CI), the agent has write access, so it’s recommended only for trusted environments.

Authentication can be done through a browser flow (cursor-agent login) or with an API key (CURSOR_API_KEY) for headless usage in CI.

MCP support. Of course

The CLI integrates smoothly with Cursor’s project rules, including .cursor/rules, AGENTS.md, and mcp.json if you’re using the Model Context Protocol (MCP).

This lets you define tool access, coding guidelines, and resources just once and use them across both the GUI and CLI.

This also means you can define workflows and plug your agent into real APIs, databases, or file systems—enabling powerful, real-world automation.

Easy installation and setup

Installation is one-liner simple:

Shell
curl https://cursor.com/install -fsS | bash
cursor-agent login
# Start CLI with a prompt
cursor-agent chat "find one bug and fix it"

From there, you can start prompting, refactoring code, or reviewing pull requests—straight from your terminal.

Cursor’s new CLI turns its agent into a portable, flexible coding assistant that you can drop into any part of your workflow.

Whether you’re working in a minimalist terminal setup, building automation in CI, or pairing it with your favorite editor, this CLI opens the door to powerful, context-aware, headless development.

It’s currently in beta, so expect ongoing improvements—but if you want to bring intelligent, GPT-5-level coding into your daily terminal flow, this is the tool to try.

5 powerful MCP servers to make the most of GPT-5

GPT-5 is out now and AI agents are smarter than ever — but real power comes when you connect them to real-world tools.

That’s where the Model Context Protocol (MCP) comes in. It lets GPT-5 talk to external services like your databases and repos — not just to understand them, but to use them.

Whether you’re building your own agent or using GPT-5 through tools like Claude, Cursor, or Copilot, these 5 ready-to-use MCP servers instantly upgrade your AI’s capabilities.

1. Firebase MCP server

If you’re already using Firebase, this server lets GPT-5 interact with your project just like you would from the CLI. It’s perfect for app automation, debugging, and user management — all from natural language.

Key features

  • Instantly bootstrap Firebase projects and apps
  • Manage Auth: list users, set custom claims, enable/disable accounts
  • Query Firestore and validate security rules
  • Send push notifications via FCM
  • Supports Data Connect (GraphQL), Remote Config, and Crashlytics

Powerful use cases

  • GPT-5 can spin up a full Firebase app with Firestore, Auth, and hosting
  • Automatically summarize recent Auth activity and flag risky behavior
  • Simulate rule access and debug failed reads/writes
  • Analyze crash logs and propose fixes directly from Crashlytics data

Get Firebase MCP Server: LINK

2. GitHub MCP Server

This official GitHub server is built to give agents full visibility into your repos. It’s great for AI-assisted issue triage, PR generation, and CI/CD diagnostics.

Key features

  • Local or hosted (OAuth) mode
  • Granular control: enable only repos, issues, pull requests, etc.
  • Read-only toggle for safe use
  • Supports Dependabot, CodeQL, GitHub Actions workflows
  • Plays well with Claude, Cursor, and other tools

Powerful use cases

  • GPT-5 can scan issues across multiple repos and suggest the highest-impact fixes
  • Generate summaries of PR conversations and flag unresolved feedback
  • Automatically file issues for flaky tests or broken workflows
  • Navigate GitHub Actions runs to debug CI/CD failures

Get GitHub MCP Server: LINK

3. Notion MCP

This hosted server lets GPT-5 act on your Notion workspace — fetching, editing, creating, and organizing content across pages and databases.

Key features

  • One-click OAuth setup for ChatGPT, Claude, Cursor, etc.
  • Unified search across Notion, Slack, Google Drive, and Jira
  • Page and database read/write support
  • Supports Streamable HTTP or stdio (via mcp-remote)
  • Great for knowledge management and workflows

Powerful use cases

  • GPT-5 can summarize meeting notes, create task trackers, or plan product launches
  • Automatically organize messy pages into structured databases
  • Generate content drafts based on workspace context
  • Answer questions using data from across Notion, Slack, and Google Docs

Get Notion MCP: LINK

4. DevDb MCP

DevDb turns your local databases into something GPT-5 can explore and reason about — without needing manual SQL writing or schema diving.

Key features

  • MCP server built into DevDb VS Code extension — also works for Cursor and Windsurf
  • Auto-detects common frameworks like Laravel, Django, and Rails
  • Supports Postgres, MySQL, SQLite, and SQL Server
  • One-click JSON config for agent setup
  • GUI tools for table editing and inspection

Powerful use cases

  • GPT-5 can inspect schema and generate queries from natural language
  • Document relationships and suggest schema changes
  • Auto-generate database migrations or seed data
  • Explore foreign key chains and infer business logic

Get DevDb MCP Server: LINK

5. Sentry MCP

This server connects GPT-5 to real-time error and performance data from Sentry — and lets it go beyond monitoring into automated debugging and even code generation.

Key features

  • OAuth setup with hosted Streamable HTTP or SSE fallback
  • Explore issues, events, traces, and performance regressions
  • View project/org metadata and DSNs
  • Integrated Seer tool for root-cause analysis and autofix
  • Supports local self-hosted Sentry too

Powerful use cases

  • GPT-5 can analyze top crashes, trace them to root causes, and suggest fixes
  • Automatically file detailed bug reports linked to Sentry errors
  • Monitor app performance and alert when KPIs regress
  • Use Seer to run auto-diagnosis and push PRs

GPT-5 is out in the wild now and these MCP servers take its raw intelligence and plug it into real-world workflows.

From spinning up Firebase apps to analyzing crashes in Sentry or navigating your entire Notion workspace — this is the future of AI: not just being intelligent, but now making things happen at massive scale in the real world.

GPT-5 coding is wild

Developers are absolutely loving the new GPT-5 (def not all tho, ha ha).

It’s elevating our software development capabilities to a whole new level.

Everything is getting so much more effortless now:

You’ll see how easily it built the website from the extremely detailed prompt, from start to finish:

On SWE-bench Verified, which tests real GitHub issues inside real repos, GPT-5 hits 74.9% — the highest score to date.

Some people seem to really hate GPT-5 tho…

“SO SLOW!”

“HORRIBLE!”:

Not too sure what slowness they’re talking about.

I even thought it was noticeably faster than previous models when I first tried it in ChatGPT. Maybe placebo?

On Aider Polyglot, which measures how well it edits code via diffs, it reaches 88%.

“BLOATED”.

“WASTES TOKENS”.

GPT-5 can chain tool calls, recover from errors, and follow contracts — so it can scaffold a service, run tests, fix failures, and explain what changed, all without collapsing mid-flow.

“CLUELESS”.

But for many others these higher benchmark scores aren’t just theoretical — GPT-5 is making a real impact in real codebases for real developers.

“Significantly better”:

See how easily JetBrains’ Junie assistant used GPT-5 to make this:

“The best”

“Sonnet level”

It’s looking especially good for frontend development, especially designing beautiful UIs.

In OpenAI’s tests, devs preferred GPT-5 over o3 ~70% of the time for frontend tasks. You can hand it a one-line brief and get a polished React + Tailwind UI — complete with routing, state, and styling that looks like it came from a UI designer.

GPT-5’s massive token limits ensure your IDE has more than enough context from your codebase to give the most accurate results.

With ~400K total token capacity (272K input, 128K reasoning/output), GPT-5 can take entire subsystems — schemas, services, handlers, tests — and make precise changes. Long-context recall is stronger — so it references the right code instead of guessing.

GPT-5 is more candid when it lacks context and less prone to fabricate results — critical if you’re letting it touch production code.

For instance, it can ask you to provide more information instead of making stuff up — or assuming you meant something else that it’s more familiar with (annoying).

gpt-5, gpt-5-mini, and gpt-5-nano all share the same coding features, with pricing scaled by power.

The sweet spot for most devs: use minimal reasoning for micro-edits and bump it up for heavy refactors or migrations.

GPT-5 makes coding assistance feel dependable.

It handles the boring 80% so you can focus on the valuable 20%, and it does it with context, precision, and a lot less hand-holding.

GPT-5 is absolutely insane

Wow this is huge.

GPT-5 is finally here and it’s completely unbelievable.

Practically destroying every other model in several AI benchmarks.

This is a massive upgrade from the GPT-4.x models.

Grok 4, the model I was just talking about the other day, saying it was the best…

GPT-5’s coding abilities are unreal.

GPT-5 absolutely dominates industry coding tests with benchmark scores of 74.9% on SWE-Bench Verified and 88% on Aider Polyglot.

Unbelievably cheap API for such massive intelligence improvements.

SWE-Bench Verified simulates real-world GitHub issues, and GPT-5’s first-attempt solve rate outperforms every competitor.

Two BILLION tokens per minute?!

Like what do you even say about that.

I mean, of course, with such a mind-bogglingly low cost it’s no wonder every AI tool (and their mother) will jump on it.

All our favorite IDEs instantly added support for it without even thinking.

Windsurf — generous as always:

But stats only tell part of the story. The real magic is in how it feels to code with GPT-5.

Cursor — you can try it for free…

You don’t have to walk it through every little thing. You just tell it what you want — “build me a login system,” “refactor this into something clean,” “find the bug here” — and it does it. One go. No back and forth. No babying it.

On Aider Polyglot the model showed exceptional multilingual coding skills — it generated and debugged code in dozens of programming languages without missing a beat.

Copilot & VS Code — never to be left out…

GPT-5 feels less like a tool and more like a teammate who never gets tired, never forgets, and somehow knows everything.

JetBrains — their Junie assistant, which I was talking about the other day, has been out for some time now.

Super impressive snake game generation:

Everybody is absolutely loving it.

And OpenAI didn’t just drop one version — they’ve also released GPT-5-mini and nano. These are smaller, faster versions that still give you much of the coding power, which is great if you’re working on a budget or just need something lightweight for quick jobs.

All of this adds up to something big. The way we write software is changing. More and more, your job as a developer isn’t to type every line, but to describe what you want — clearly, thoughtfully — and let the AI handle the heavy lifting. GPT-5 lets you move faster, take on bigger projects, and focus on the parts of programming that actually require creativity and judgment.

Bottom line? GPT-5 coding is nuts. It’s fast, smart, flexible, and it actually understands what you’re trying to do. Whether you’re a pro dev or just getting started, this model is going to change how you think about building software.

Forever.

Clean code is dead

If you’re still obsessed with writing “clean code” in 2025 then you are living in the stone age.

The clean code era is over. AI is here.

Your precious descriptive variable names,

Your admirable small functions,

Your meticulous design patterns and tireless refactorings…

All these things are far far less important now in the age of AI.

I can’t even remember the last time I created a variable by myself.

Wrote a function from scratch by myself?

Even created a file by myself? 🤔 (super rare)

Nobody codes like that anymore (sorry).

AI is here and modern developers don’t code that way.

“Lol, you people and your annoying AI hype. Vibe coding is useless and AI can never replace programmers in any way. Stop talking nonsense.”

Ha ha, yes I know some of you are still scoffing with disdain at the recent uprising of vibe coding and coding agents.

You proudly refuse to use even the slightest bit of AI in your coding.

Even basic Copilot code completions from 2021 are a no-no for you.

Well hate to break it to you but the world is leaving people like this behind.

There’s no going back — AI-first development is fast becoming the gold standard.

What matters most now is not clean or clear code — what matters is clear intent, goals, and context for the AI agent.

Not descriptive variable names — but now descriptive well-written prompts.

No longer just using the DRY principle in your code — but now also in all your AI interactions by setting powerful system prompts and personalized style guides.

No longer just about using the most intuitive and powerful libraries and APIs — but now also about using the most powerful and highly capable MCP servers.

Coding in 2025 is no longer about typing — it’s about thinking.

Actually it always has been — but now the power of your thoughts has exploded drastically.

A thought, a design, an idea that took several days and weeks to be typed into life now takes a few minutes of prompting.

AI has astronomically expanded the power of our minds to do far more than ever before at any point in human history.

Should we still be wasting so much time obsessing over low-level details like whether we named our variables with snake case or camel case?

It’s time to level up and achieve our true potential.

Comprehensive context provisions, sophisticated prompting techniques, elaborate intent definitions, hyper-personalized system prompts, high-powered MCP server integrations…

These are the crucial things you need to focus on right now.

These are what will turn you into a god-mode developer.