Tari Ibaba

Tari Ibaba is a software developer with years of experience building websites and apps. He has written extensively on a wide range of programming topics and has created dozens of apps and open-source libraries.

GPT-5 coding is wild

Developers are absolutely loving the new GPT-5 (def not all tho, ha ha).

It’s elevating our software development capabilities to a whole new level.

Everything is getting so much more effortless now:

You’ll see how easily it built the website from an extremely detailed prompt, from start to finish:

On SWE-bench Verified, which tests real GitHub issues inside real repos, GPT-5 hits 74.9% — the highest score to date.

Some people seem to really hate GPT-5 tho…

“SO SLOW!”

“HORRIBLE!”:

Not too sure what slowness they’re talking about.

I even thought it was noticeably faster than previous models when I first tried it in ChatGPT. Maybe placebo?

On Aider Polyglot, which measures how well it edits code via diffs, it reaches 88%.

“BLOATED”.

“WASTES TOKENS”.

GPT-5 can chain tool calls, recover from errors, and follow contracts — so it can scaffold a service, run tests, fix failures, and explain what changed, all without collapsing mid-flow.

“CLUELESS”.

But for many others these higher benchmark scores aren’t just theoretical — they’re making a real impact in real codebases from real developers.

“Significantly better”:

See how easily JetBrains’ Junie assistant used GPT-5 to make this:

“The best”

“Sonnet level”

It’s looking especially good for frontend development, particularly for designing beautiful UIs.

In OpenAI’s tests, devs preferred GPT-5 over o3 ~70% of the time for frontend tasks. You can hand it a one-line brief and get a polished React + Tailwind UI — complete with routing, state, and styling that looks like it came from a UI designer.

GPT-5’s massive token limits ensure that your IDE has more than enough context from your codebase to give the most accurate results.

With ~400K total token capacity (272K input, 128K reasoning/output), GPT-5 can take entire subsystems — schemas, services, handlers, tests — and make precise changes. Long-context recall is stronger — so it references the right code instead of guessing.

GPT-5 is more candid when it lacks context and less prone to fabricate results — critical if you’re letting it touch production code.

Like it could ask you to provide more information instead of making stuff up — or assuming you meant something else that it’s more familiar with (annoying).

gpt-5, gpt-5-mini, and gpt-5-nano all share the same coding features, with pricing scaled by power.

The sweet spot for most devs: use minimal reasoning for micro-edits and bump it up for heavy refactors or migrations.

GPT-5 makes coding assistance feel dependable.

It handles the boring 80% so you can focus on the valuable 20%, and it does it with context, precision, and a lot less hand-holding.

GPT-5 is absolutely insane

Wow this is huge.

GPT-5 is finally here and it’s completely unbelievable.

Practically destroying every other model in several AI benchmarks.

This is a massive upgrade from the GPT-4.x models.

Grok 4 — the model I was just talking about the other day, saying it was the best…

GPT-5’s coding abilities are unreal.

GPT-5 absolutely dominates industry coding tests with benchmark scores of 74.9% on SWE-Bench Verified and 88% on Aider Polyglot.

Unbelievably cheap API for such massive intelligence improvements.

SWE-Bench Verified simulates real-world GitHub issues, and GPT-5’s first-attempt solve rate outperforms every competitor.

Two BILLION tokens per minute?!

Like what do you even say about that.

I mean of course, with such a mind-bogglingly low cost it’s no wonder every AI tool (and their mother) is jumping on it.

All our favorite IDEs instantly added support for it without even thinking.

Windsurf — generous as always:

But stats only tell part of the story. The real magic is in how it feels to code with GPT-5.

Cursor — you can try it for free…

You don’t have to walk it through every little thing. You just tell it what you want — “build me a login system,” “refactor this into something clean,” “find the bug here” — and it does it. One go. No back and forth. No babying it.

On Aider Polyglot the model showed exceptional multilingual coding skills — it generated and debugged code in dozens of programming languages without missing a beat.

Copilot & VS Code — never to be left out…

GPT-5 feels less like a tool and more like a teammate who never gets tired, never forgets, and somehow knows everything.

JetBrains — their Junie assistant, which I mentioned a while back, has been out for some time now.

Super impressive snake game generation:

Everybody is absolutely loving it.

And OpenAI didn’t just drop one version — they’ve also released GPT-5-mini and nano. These are smaller, faster versions that still give you much of the coding power, which is great if you’re working on a budget or just need something lightweight for quick jobs.

All of this adds up to something big. The way we write software is changing. More and more, your job as a developer isn’t to type every line, but to describe what you want — clearly, thoughtfully — and let the AI handle the heavy lifting. GPT-5 lets you move faster, take on bigger projects, and focus on the parts of programming that actually require creativity and judgment.

Bottom line? GPT-5 coding is nuts. It’s fast, smart, flexible, and it actually understands what you’re trying to do. Whether you’re a pro dev or just getting started, this model is going to change how you think about building software.

Forever.

Clean code is dead

If you’re still obsessed with writing “clean code” in 2025 then you are living in the stone age.

The clean code era is over. AI is here.

Your precious descriptive variable names,

Your admirable small functions,

Your meticulous design patterns and tireless refactorings…

All these things are far far less important now in the age of AI.

I can’t even remember the last time I created a variable by myself.

Wrote a function from scratch by myself?

Even created a file by myself? 🤔 (super rare)

Nobody codes like that anymore (sorry).

AI is here and modern developers don’t code that way.

Lol, you people and your annoying AI hype. Vibe coding is useless and AI can never replace programmers in any way. Stop talking nonsense.

Ha ha, yes I know some of you are still scoffing with disdain at the recent uprising of vibe coding and coding agents.

You proudly refuse to use even the slightest bit of AI in your coding.

Even basic Copilot code completions from 2021 are a no-no for you.

Well hate to break it to you but the world is leaving people like this behind.

There’s no going back — AI-first development is fast becoming the gold standard.

What matters most now is not clean or clear code — what matters is clear intent, goals, and context for the AI agent.

Not descriptive variable names — but descriptive, well-written prompts.

No longer just using the DRY principle in your code — but now also in all your AI interactions by setting powerful system prompts and personalized style guides.

No longer just about using the most intuitive and powerful libraries and APIs — but now also using the most powerful and highly capable MCP servers.

Coding in 2025 is no longer about typing — it’s about thinking.

Actually it always has been — but now the power of your thoughts has exploded drastically.

A thought, a design, an idea that took several days and weeks to be typed into life now takes a few minutes of prompting.

AI has astronomically expanded the power of our minds to do far more than ever before at any point in human history.

Should we still be wasting so much time obsessing over low-level details like whether we named our variables with snake case or camel case?

It’s time to level up and achieve our true potential.

Comprehensive context provisions, sophisticated prompting techniques, elaborate intent definitions, hyper-personalized system prompts, high-powered MCP server integrations…

These are the crucial things you need to focus on right now.

These are what will turn you into a god-mode developer.

This new AI tool from Google just destroyed web & UI designers

Wow this is absolutely massive.

The new Stitch tool from Google may have just completely ruined the careers of millions of web & UI designers — and it’s only just getting started.

Just check out these stunning designs:

This is an absolute game changer for anyone who’s ever dreamed of building an app but felt intimidated by the whole design thing.

It’s a huge huge deal.

Just imagine you have a classic app idea — a photo-sharing app, a workout app, a to-do list, whatever…

❌ Before:

You either hire a designer or spend hours wrestling with design software trying to create a pixel-perfect UI.

Or maybe you even just try to wing it and hope for the best, making crucial design decisions on the fly as you develop the app.

✅ Now:

Just tell Stitch whatever the hell you’re thinking.

Literally just describe your app in plain English.

“A blue-themed photo-sharing app”:

Look how Stitch let me easily edit the design — adding likes for every photo:

Or, if you’ve got a rough sketch on a napkin, snap a pic and upload it. Stitch takes your input, whatever it is, and then — BOOM — it generates a visual design for your app’s user interface. It’s like having a personal UI designer at your fingertips.

But it doesn’t stop there. This is where it gets really cool — especially for developers.

Stitch doesn’t just give you a pretty picture. It also spits out the actual HTML and CSS code that brings that design to life.

Suddenly your app concept isn’t just an idea — it’s a working prototype.

How amazing is that?

Stitch is pretty smart too. It can give you different versions of your design, so you can pick the one you like best. You can also tweak things – change the colors, switch up the fonts, adjust the layout. It’s incredibly flexible. And if you want to make changes, just chat with Stitch. Tell it what you want to adjust, and it’ll make it happen. It’s a conversation, not a command line.

Behind all this magic are Google’s powerful AI models, Gemini 2.5 Pro and Gemini 2.5 Flash. These are the brains making sense of your ideas and turning them into designs and code. The whole process is surprisingly fast.

Who is this for?

Everyone. If you’re a complete beginner with zero design or coding experience, Stitch is your new best friend. You can create professional-looking apps without breaking a sweat.

For seasoned developers, it’s a fantastic way to rapidly prototype ideas and get a head start on coding.

Right now, Stitch is in public beta, available in 212 countries, though it only speaks English for now. And yes, you can use it for free, with a monthly limit on how many designs you can generate.

It’s a super-powered starting gun for your app development journey. It streamlines the early stages to get you from a raw idea to a tangible design and code much faster.

And if you still want more fine-grained control, you can always export your design to Figma.

So, if you’ve got an app idea bubbling in your mind, Google Stitch might just be the tool you’ve been waiting for to bring it to life.

The most powerful AI model in the world just got a coding CLI

Wow this is HUGE.

The most intelligent AI model on the planet just got an incredible new coding CLI.

Grok — the genius model from xAI, sitting comfortably at #1 on several notable AI benchmarks.

Massive boost in speed and power in your software development.

Carry out massive context-aware tasks on your codebase with simple English right from your CLI:

  • Refactor functions or files
  • Edit project code based on coding standards
  • Generate shell scripts or bash commands
  • Automate git operations
  • Integrate with any MCP server

What took an hour of manual coding and debugging before → Now: a few minutes.

Just describe whatever you want and it takes care of the rest.

grok-cli is open-source and growing FAST with dozens of contributions already.

Conversational

Launch Grok CLI with the grok command:

  • Opens an interactive session
  • Remembers previous prompts
  • Lets you refine requests naturally — like a real-life conversation.

Context-aware

Grok CLI reads your project files — it knows what it’s editing.

Ask it to:

  • Change the logic in a file
  • Rename functions
  • Split code into modules

It works from actual source context — no guesses or hallucinations.

Shell and bash

Ask something like “compress all images in the assets folder” or “create a backup of the .env file” — Grok CLI will generate and run the correct commands — like zip.

Built-in

Grok CLI includes smart tools for file editing, git management, and navigation.

These tools run automatically when your prompt matches what they handle — giving you a clean interface without memorizing commands.

Project-specific instructions

By placing a .grok/GROK.md file in your repo, you can customize how Grok CLI behaves.

For example, you can specify preferred frameworks, coding standards, or architectural constraints. This makes it ideal for team workflows and codebases with strict conventions.
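As an illustrative sketch (the file path .grok/GROK.md comes from the docs above; the contents here are entirely made up for the example), such a file might look like:

```markdown
# Project instructions for Grok CLI (illustrative example)

- Use TypeScript with strict mode enabled
- Follow the repo's existing ESLint config; prefer named exports
- All new UI components go in src/components/
- Never modify files under migrations/ without asking first
```

Grok CLI reads this file at the start of a session, so every prompt is automatically interpreted against your team's conventions.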

Large project context support

Grok CLI supports large context windows. It can review and reason over entire projects or multiple files at once, making it suitable for complex refactors, architecture reviews, or onboarding support.

Installation and setup

To get started you need:

  • Node.js version 16 or higher
  • A Grok API key from x.ai

Installation is straightforward:

npm install -g @vibe-kit/grok-cli

Then run:

grok

You can also run one-off prompts with:

grok --prompt "What does this function do?"

Set API keys with:

  • GROK_API_KEY environment variable
  • .env file
  • CLI --apiKey flag
  • ~/.grok/user-settings.json file

You can also configure the base URL and model via the same methods.

Workflow integration

Grok CLI is designed to work with your existing tools. It supports both interactive and headless modes.

  • Interactive mode: Launch with grok to enter a conversational session.
  • Headless mode: Use --prompt for scripting, CI pipelines, or automation.
  • Custom configuration: Use .grok/settings.json to define per-project behavior.

The CLI makes it easy to integrate AI into your day-to-day development without changing your habits or workflow.

Extend Grok CLI with MCP

One of Grok CLI’s most advanced features is support for the Model Context Protocol (MCP).

With MCP, you can connect Grok CLI to external services—like issue trackers, deployment tools, or custom APIs—and control them through the terminal.

Example:

grok mcp add linear --command "node" --args mcp-linear.js

Grok CLI will now be able to read issues from Linear, create tasks, or pull data—all through AI commands.

The built-in mcp command supports:

  • add: add a new server
  • list: view connected MCP servers
  • test: verify a server’s functionality
  • remove: disconnect a server

This turns Grok CLI from a code assistant into a full-fledged AI agent platform.

Grok CLI is more than just another AI chatbot—it’s a new kind of developer tool.

It brings powerful language models right into your command line — blending natural language prompts with deep integration into your codebase and workflow.

If you’re a developer who spends time in the terminal and wants to work faster, write better code, and automate tedious tasks using AI, Grok CLI is absolutely worth a try.

These 5 MCP servers reduce AI code errors by 99% (perfect context)

AI coding assistants are amazing and powerful—until they start lying.

Like it just gets really frustrating when they hallucinate APIs or forget your project structure and break more than they fix.

And why does this happen?

Context.

They just don’t have enough context.

Context is everything for AI assistants. That’s why MCP is so important.

These MCP servers fix that. They ground your AI in the truth of your codebase—your files, libraries, memory, and decisions—so it stops guessing and starts delivering.

These five will change everything.

Context7 MCP Server

Context7 revolutionizes how AI models interact with library documentation—eliminating outdated references, hallucinated APIs, and unnecessary guesswork.

It sources up-to-date, version-specific docs and examples directly from upstream repositories — to ensure every answer reflects the exact environment you’re coding in.

Whether you’re building with React, managing rapidly evolving dependencies, or onboarding a new library, Context7 keeps your AI grounded in reality—not legacy docs.

It seamlessly integrates with tools like Cursor, VS Code, Claude, and Windsurf, and supports both manual and automatic invocation. With just a line in your prompt or an MCP rule, Context7 starts delivering live documentation, targeted to your exact project context.
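As a concrete sketch, wiring Context7 into a client like Cursor or Claude Desktop is typically a single entry in the client’s MCP config — the package name below is the commonly published one, but double-check your client’s docs:

```json
{
  "mcpServers": {
    "context7": {
      "command": "npx",
      "args": ["-y", "@upstash/context7-mcp"]
    }
  }
}
```

After that, a prompt cue like “use context7” (or an MCP rule) triggers the live documentation lookup automatically.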

Key features

  • On-the-fly documentation: Fetches exact docs and usage examples based on your installed library versions—no hallucinated syntax.
  • Seamless invocation: Auto-invokes via MCP client config or simple prompt cues like “use context7”.
  • Live from source: Pulls real-time content straight from upstream repositories and published docs.
  • Customizable resolution: Offers tools like resolve-library-id and get-library-docs to fine-tune lookups.
  • Wide compatibility: Works out-of-the-box with most major MCP clients across dozens of programming languages.

Errors it prevents

  • Calling deprecated or removed APIs
  • Using mismatched or outdated function signatures
  • Writing syntax that no longer applies to your version
  • Missing new required parameters or arguments
  • Failing to import updated module paths or packages

Powerful use cases

  • Projects built on fast-evolving frameworks like React, Angular, Next.js, etc.
  • Onboarding to unfamiliar libraries without constant tab switching
  • Working on teams where multiple versions of a library may be in use
  • Auditing legacy codebases for outdated API usage
  • Auto-generating code or tests with correct syntax and parameters for specific versions

Get Context7 MCP Server: LINK

Memory Bank MCP Server

The Memory Bank MCP server gives your AI assistant persistent memory across coding sessions and projects.

Instead of repeating the same explanations, code patterns, or architectural decisions, your AI retains context from past work—saving time and improving coherence. It’s built to work across multiple projects with strict isolation, type safety, and remote access, making it ideal for both solo and collaborative development.

Key features

  • Centralized memory service for multiple projects
  • Persistent storage across sessions and application restarts
  • Secure path traversal prevention and structure enforcement
  • Remote access via MCP clients like Claude, Cursor, and more
  • Type-safe read, write, and update operations
  • Project-specific memory isolation

Errors it prevents

  • Duplicate or redundant function creation
  • Inconsistent naming and architectural patterns
  • Repeated explanations of project structure or goals
  • Lost decisions, assumptions, and design constraints between sessions
  • Memory loss when restarting the AI or development environment

Powerful use cases

  • Long-term development of large or complex codebases
  • Teams working together on shared projects needing consistent context
  • Developers aiming to preserve and reuse design rationale across sessions
  • Projects with strict architecture or coding standards
  • Solo developers who want continuity and reduced friction when resuming work

Get Memory Bank MCP Server: LINK

Sequential Thinking MCP Server

Definitely one of the most important MCP servers out there.

It’s designed to guide AI models through complex problem-solving processes — it enables structured and stepwise reasoning that evolves as new insights emerge.

Instead of jumping to conclusions or producing linear output, this server helps models think in layers—making it ideal for open-ended planning, design, or analysis where the path forward isn’t immediately obvious.

Key features

  • Step-by-step thought sequences: Breaks down complex problems into numbered “thoughts,” enabling logical progression.
  • Reflective thinking and branching: Allows the model to revise earlier steps, fork into alternative reasoning paths, or return to prior stages.
  • Dynamic scope control: Adjusts the total number of reasoning steps as the model gains more understanding.
  • Clear structure and traceability: Maintains a full record of the reasoning chain, including revisions, branches, and summaries.
  • Hypothesis testing: Facilitates the generation, exploration, and validation of multiple potential solutions.

Errors it prevents

  • Premature conclusions due to lack of iteration
  • Hallucinated or shallow reasoning in complex tasks
  • Linear, single-path thinking in areas requiring exploration
  • Loss of context or rationale behind decisions in multi-step outputs

Powerful use cases

  • Planning and project breakdowns
  • Software architecture and design decisions
  • Analyzing ambiguous or evolving problems
  • Creative brainstorming and research direction setting
  • Any situation where the model needs to explore multiple options or reflect on its own logic

Once you install it, it becomes a powerful extension of your model’s cognitive abilities—giving you not just answers, but the thinking behind them.

Get Sequential Thinking MCP Server: LINK

Filesystem MCP Server

The Filesystem MCP server provides your AI with direct, accurate access to your local project’s structure and contents.

Instead of relying on guesses or hallucinated paths, your agent can read, write, and navigate files with precision—just like a developer would. This makes code generation, refactoring, and debugging dramatically more reliable.

No more broken imports, duplicate files, or mislocated code. With the Filesystem MCP your AI understands your actual workspace before making suggestions.

Key features

  • Read and write files programmatically
  • Create, list, and delete directories with precise control
  • Move and rename files or directories safely
  • Search files using pattern-matching queries
  • Retrieve file metadata and directory trees
  • Restrict all file access to pre-approved directories for security
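A minimal client config sketch — note how the allowed directory is passed as an argument, which is what enforces the pre-approved-directories restriction (the path here is a placeholder; the package name is the commonly published one, so verify against your client’s docs):

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-filesystem",
        "/path/to/your/project"
      ]
    }
  }
}
```

Anything outside the listed directory is simply invisible to the AI, which is exactly the safety property you want before letting an agent write files.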

Ideal scenarios

  • Managing project files during active development
  • Refactoring code across multiple directories
  • Searching for specific patterns or code smells at scale
  • Debugging with accurate file metadata
  • Maintaining structural consistency across large codebases

Get FileSystem MCP: LINK

GitMCP

AI assistants can hallucinate APIs, suggest outdated patterns, and sometimes overwrite code that was just written.

GitMCP solves this by making your AI assistant fully git-aware—enabling it to understand your repository’s history, branches, files, and contributor context in real time.

Whether you’re working solo or in a team, GitMCP acts as a live context bridge between your local development environment and your AI tools. Instead of generic guesses, your assistant makes informed suggestions based on the actual state of your repo.

GitMCP is available as a free, open-source MCP server, accessible via gitmcp.io/{owner}/{repo} or embedded directly into clients like Cursor, Claude Desktop, Windsurf, or any MCP-compatible tool. You can also self-host it for privacy or customization.
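Since GitMCP is a hosted server, a typical client entry just points at the URL, using the gitmcp.io/{owner}/{repo} pattern described above (field names can vary slightly between MCP clients, so treat this as a sketch):

```json
{
  "mcpServers": {
    "gitmcp": {
      "url": "https://gitmcp.io/{owner}/{repo}"
    }
  }
}
```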

Key features

  • Full repository indexing with real-time context
  • Understands commit and branch history
  • Smart suggestions based on existing code and structure
  • Lightweight issue and contributor context integration
  • Live access to documentation and source via GitHub or GitHub Pages
  • No setup required for public repos—just add a URL and start coding

Errors it prevents

  • Code conflicts with recent commits
  • Suggestions that ignore your branching strategy
  • Overwriting teammates’ changes during collaboration
  • Breaking functionality due to missing context
  • AI confusion from outdated or hallucinated repo structure

Ideal scenarios

  • Collaborating in large teams with frequent commits
  • Working on feature branches that need context-specific suggestions
  • Reviewing and resolving code conflicts with full repo awareness
  • Structuring AI-driven workflows around GitHub issues
  • Performing large-scale refactors across multiple files and branches

Get GitMCP: LINK

This new AI coding agent from Google is unbelievable

Wow this is insane.

This new AI coding agent from Google is simply incredible. Google is getting dead serious about dev tooling. No more messing around.

Jules is a genius agent that can understand your intent, plan out steps, and execute complex coding tasks without even trying.

A super-smart teammate who can tackle coding tasks on its own asynchronously to make software dev so much easier.

It works seamlessly in the background so you can focus on other important stuff.

Gemini 2.5 Pro

Jules is powered by Gemini 2.5 Pro, which is Google’s advanced AI model for complex tasks. This gives it serious brainpower for understanding code.

And you bet 2.5 Flash is on its way to give it even more insane speeds.

When you give Jules a task it clones your codebase into a secure virtual machine in the Google Cloud. This is like a private workspace where Jules can experiment safely without messing with your live code.

It then understands the full context of your project. This is crucial because it helps Jules make smart, relevant changes. It doesn’t just look at isolated bits of code; it sees the whole picture.

After it’s done, Jules shows you its plan, its reasoning for the changes, and a “diff” of what it changed. You get to review everything and approve it before it goes into your main project. It even creates pull requests for you on GitHub!

Jules is quite the multi-tasker. It can handle a variety of coding chores you might not enjoy doing yourself.

For example, it can write tests for your code, which is super important for quality. It can also build new features from scratch, helping you speed up development.

Bug fixing? Yeah Jules can do that too. It can even bump dependency versions, which can be a tedious and error-prone task.

One cool feature is its audio changelogs. Jules can give you spoken summaries of recent code changes, turning your project history into something you can simply listen to.

Google has made it clear that you’re always in charge. Jules doesn’t train on your private code, and your data stays isolated. You can review and modify Jules’s proposed plans at every step.

It works directly with GitHub, so it integrates seamlessly with your existing workflow. No need to learn a new platform or switch between different tools.

Jules is currently in public beta, and it’s free to use with some limits. This is a big step towards “agentic development,” where AI systems take on more responsibility in the software development process.

It might sound like Jules is coming for developer jobs, but that’s probably not the goal here — at least for now.

Jules is meant to be a powerful tool that frees up developers to focus on higher-level thinking, design, and more creative problem-solving. It’s about making you more productive and efficient.

So, if you’re a developer, now’s a great time to check out Jules. It could really change the way you work.

This new IDE from Amazon is an absolute game changer

Woah Amazon’s new Kiro IDE is absolutely HUGE.

And if you think this is just another Cursor or Copilot competitor then you are dead wrong on the spot…

This is a revolutionary approach to how AI coding assistants are supposed to be… This is real software development.

Development based on real GOALS — not just randomly prompting an agent here and there.

No more blind stateless changes — everything is grounded on real specs, requirements, and goals 👇

Amazon Kiro understands that you’re not just coding for coding’s sake — you have actual targets in mind.

Targets it can even define for you — in an incredibly detailed and comprehensive way:

Look — it can even make unbelievably sophisticated designs for you based on your requirements 👇

I told you — this is REAL software development.

This is just one of the incredibly innovative Kiro features that no other IDE has.

And guess what — it’s based on VS Code, so switching is ridiculously easy — you can even keep your VS Code settings and most extensions.

Goal-first thinking, agentic automation, and deep integration with tools developers already use.

Two big ideas

Kiro is based on two big ideas it implements in an unprecedented way:

Spec-driven development

This is very similar to what Windsurf tried to do with their recent Markdown planning mode update:

You don’t start with code. You start with intent — written in natural language or diagrams.

These specs live alongside your codebase, guiding it as the project evolves. Kiro continuously uses them to generate and align features, update documentation, track architectural intent, and catch inconsistencies.

Background hooks

This one is absolutely insane — how come no one ever thought of this until now?

Hooks — automated agents that run in the background. As you code, they quietly:

  • Generate and update docs
  • Write and maintain tests
  • Flag technical debt
  • Improve structure and naming
  • Ensure the implementation matches your specs

This isn’t just a chat window on the side. This is an always-on assistant that sees the entire project and works with you on every save.

Under the hood

Code OSS Core

Kiro is built on Code OSS — the same open-source engine behind VS Code. Your extensions, keybindings, and theme carry over seamlessly. Zero learning curve.

MCP Integration

It supports the Model Context Protocol, allowing Kiro to call external agents and tools through a shared memory layer. This sets it up for a future of multi-agent collaboration that’s already taking shape.

Model Support

Kiro runs on Claude Sonnet 4.0 by default, with fallback to Claude 3.7. Support for other models, like Gemini, is on the roadmap — and the architecture is designed to be model-flexible.

Massive demand already

Kiro is in free preview right now — but massive demand has already forced AWS to cap usage and implement a waitlist.

A full pricing structure is on the way — something like:

  • Free: 50 interactions/month
  • Pro: $19/month for 1,000 interactions
  • Pro+: $39/month for 3,000 interactions

Signups are open, but usage is currently restricted to early testers.

Better

If you’ve used Cursor or Windsurf, you already know how powerful it is to have agentic workflows built directly into your IDE.

Kiro builds on that foundation — but shifts from reactive prompting to proactive structure. It doesn’t just assist your coding. It tries to own the meta-work: the tests you skip, the docs you forget, the loose ends that add up over time.

That’s where Kiro stakes its claim — not just as a smart code editor, but as an operating system for full-stack development discipline.

Don’t ignore this

Kiro is still early, but it’s not experimental in spirit. It’s built with a clear vision:

  • Bring AI into every layer of the software development process
  • Anchor work around intent, not just implementation
  • Support fast prototyping and scalable production with equal seriousness

For solo builders and teams alike, Kiro is most definitely worth keeping an eye on.

Not just for what it does now, but for what it signals about where modern development is headed.

MCP is an absolute game changer but what is it??

MCP MCP… what’s this annoying MCP thing I keep seeing everywhere? What does it even mean?

Just another annoying AI buzzword that will die out soon right?

If you think that then you are dead wrong. This is HUGE.

MCP is a massive new invention that is going to change everything.

It’s a major milestone for LLMs and agents and the AI revolution in general.

Bla bla bla… what does it mean Tari?? Start talking or GTFO!

Okay okay I will now but you really need to understand just how much of a game changer this is.

So this is the part where I tell you that MCP is an acronym that means Model Context Protocol — but that wouldn’t be saying much and I’ll be boring you to sleep.

The best thing is to give you a fascinating and powerful real-world analogy that you will understand instantly without even trying.

Think of the brain. The human brain.

Your brain is incredibly powerful — far, far more intelligent than all the other species (combined??). Not even close.

But have you ever realized that your brain is… USELESS?!

Duuuude just effin explain MCP for me, I didn’t ask for your insults

No listen I’m not calling you dumb or anything like that — what I mean is — your brain can’t DO anything on its own, as far as the external world is concerned.

The only way your brain actually does anything in the physical world is by being connected to the rest of your body.

You could be the greatest genius the universe has ever seen.

You could THINK of the most beautiful poem, the most persuasive argument, the funniest joke in history…

You could IMAGINE the next great novel, the blueprint for a world-changing invention, the cure for cancer…

You may KNOW where you want to go — the perfect destination, the goal, the person to meet, the protest to join, the stage to walk onto…

But without your biological tools — your mouth, your eyes, your hands, your feet… without these things, you are as good as non-existent.

There will be absolutely nothing you can DO. You are completely powerless as far as the outside world is concerned.

So are you getting it now? This is the problem we’ve had with LLMs.

Since 2022 they’ve wowed us with their insane creativity and coding and summarization ability and so much more.

But they couldn’t really DO anything.

All they did was provide us with information. Even when AI agents came along, all they could do was reason and think (“think”).

It was our brains that then used the information to manually make things happen in the world.

This is what MCP changes forever.

Now LLMs and agents can think and process information — and also perform real-world actions.

They can get crucial data from ANY external data source to do ANYTHING.

AI agents are now in absolute god mode.

With MCP Servers the possibilities are now literally endless.

MCP Server. That’s just the name for anything that provides tools to an AI agent.

The MCP Server provides the tools. MCP tells it HOW to provide the tools.

For example:

  • GitHub MCP Server: provides tools that let the AI agent search any GitHub repo.
  • WhatsApp MCP Server: provides tools that let you search WhatsApp chats.

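To make that concrete, here’s a toy sketch in plain Python — not the real MCP SDK, and the server, tool name, and repo are all invented — loosely modeled on MCP’s `tools/list` and `tools/call` request types. The agent first asks the server what tools it offers, then calls one by name with JSON arguments:

```python
import json

# Toy stand-in for an MCP Server (illustration only -- a real server
# would use the official MCP SDK and speak JSON-RPC over stdio or HTTP).
class ToyMCPServer:
    def __init__(self):
        self._tools = {}

    def tool(self, name, description):
        """Register a function as a tool the agent can discover and call."""
        def decorator(fn):
            self._tools[name] = {"description": description, "fn": fn}
            return fn
        return decorator

    def handle(self, request_json):
        """Handle a JSON-RPC-style request like 'tools/list' or 'tools/call'."""
        req = json.loads(request_json)
        if req["method"] == "tools/list":
            result = [{"name": n, "description": t["description"]}
                      for n, t in self._tools.items()]
        elif req["method"] == "tools/call":
            tool = self._tools[req["params"]["name"]]
            result = tool["fn"](**req["params"]["arguments"])
        else:
            result = None
        return json.dumps({"id": req["id"], "result": result})

server = ToyMCPServer()

@server.tool("search_repo", "Search a GitHub repo for a query string")
def search_repo(repo, query):
    # A real GitHub MCP Server would hit the GitHub API here.
    return f"results for '{query}' in {repo}"

# The agent first discovers what tools exist...
print(server.handle('{"id": 1, "method": "tools/list"}'))
# ...then calls one by name with JSON arguments.
print(server.handle(json.dumps({
    "id": 2, "method": "tools/call",
    "params": {"name": "search_repo",
               "arguments": {"repo": "octocat/Hello-World",
                             "query": "readme"}},
})))
```

Notice that the agent never needs to know anything about the server’s internals — it only needs the two request shapes.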
You see, the real game changer of MCP is NOT actually what all these incredible MCP Servers let you do.

Before MCP it was already somewhat possible to make LLMs interact with external services — for example, by manipulating the prompt or fine-tuning the model to produce some sort of JSON response of tools to call.

This was what platforms like AutoGPT and BabyAGI tried to do. It’s also why OpenAI and Google added function calling support to their models.

The main problem with these different attempts was exactly that — they were all different. Or proprietary, like GPT and Gemini function calling.

There was no standard and open way to let anyone create an AI agent that could have external tools plugged into it. Or to create tools to plug into any AI agent.

But now MCP has changed all that.

Now anyone can create programs that provide tools to make AI agents more powerful — without knowing anything about the agents beforehand.

That’s why we’ve been calling MCP the USB-C of AI applications.

USB-C standardized how tons of devices that need to send all sorts of data to each other communicate.

Now as a manufacturer you no longer need to worry about the specific devices your users might need to connect your product to.

Just make sure it comes with a USB-C port and you’re good.

Your users can connect it to the billions of other devices in the world that support USB-C — to provide or receive power or any kind of data.

And not just existing devices but ALL future devices — as long as they also support it.

This is the same game-changing thing MCP lets us do.

No more need for prompt manipulation or model-specific function calling.

And anyone can create agentic apps to use these tools, without knowing anything about the tools beforehand.

As long as the AI agent creator and MCP Server maker abide by the protocol, they will work with each other.

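The agent side of that deal can be sketched the same way: a generic client written only against the protocol, with zero prior knowledge of the server it talks to. This is a toy illustration in plain Python — the weather server and its tool are invented, and a real client would use the MCP SDK — but it shows why one client works against any compliant server:

```python
import json

def discover_and_call(send, tool_name, arguments):
    """Generic MCP-style client: works against ANY server speaking the
    protocol, with no prior knowledge of its tools (illustration only)."""
    # Step 1: ask the server what tools it has.
    listing = json.loads(send(json.dumps({"id": 1, "method": "tools/list"})))
    names = [t["name"] for t in listing["result"]]
    if tool_name not in names:
        raise ValueError(f"server has no tool named {tool_name!r}")
    # Step 2: call the tool by name with JSON arguments.
    resp = send(json.dumps({
        "id": 2, "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }))
    return json.loads(resp)["result"]

# A stand-in server the client knows nothing about in advance.
def fake_weather_server(request_json):
    req = json.loads(request_json)
    if req["method"] == "tools/list":
        result = [{"name": "get_weather",
                   "description": "Current weather for a city"}]
    else:
        result = f"sunny in {req['params']['arguments']['city']}"
    return json.dumps({"id": req["id"], "result": result})

print(discover_and_call(fake_weather_server, "get_weather", {"city": "Lagos"}))
```

The exact same client code would work unchanged against a GitHub server, a WhatsApp server, or any MCP Server built years from now — that’s the USB-C effect.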
So that’s why it’s a really major milestone for the AI revolution.

Anything you can write code to do — an AI agent can do now.

Search your files in the cloud, update databases, control smart switches…

It makes your AI agents waaay more powerful — which makes YOU more powerful.

Huge Claude 4 coding news for this IDE

Wow this is some incredible news…

Claude 4 Sonnet is now available in Windsurf with no API key! (No more BYOK in Cascade).

You no longer have to pay additional costs for the amazing model.

And the coding has been absolutely insane 👇

For a while now people have been pointing out how amazing they find Claude 4 Sonnet, especially compared to Gemini 2.5 Pro and GPT-4.1. And this isn’t just hype – the difference is showing up in real-world workflows, especially in long context tasks, clean refactoring, and deep architectural suggestions.

And remember this is the junior sibling of Claude 4 Opus — that incredible model that literally did all the coding by itself in a massive project for a full hour and a half…

That was 90 actual minutes of total hands-free autonomous coding genius with zero bugs. Opus 4 planned, coded, edited, and completed an entire full-stack project, and Claude 4 Sonnet shares a massive chunk of that DNA. In fact, for a lot of development tasks, especially within a controlled and optimized coding environment like Windsurf, the gap between Sonnet and Opus is surprisingly small.

What makes this even more monumental is the fact that Windsurf had previously been locked out of native Claude 4 support.

When Claude 4 launched back in May, Anthropic explicitly restricted direct Windsurf access, most likely due to the intense competitive landscape and the recent strategic moves surrounding Windsurf — including rumors of OpenAI acquiring the company and Google’s subsequent licensing deal for Windsurf’s code-generation platform.

Disgustingly verbose tho?

The workaround was BYOK — “bring your own key.” That meant if you wanted to use Sonnet or Opus in Windsurf, you had to sign up for the Claude API separately, manage your own usage, and copy-paste keys manually. It worked — but it broke the seamless, fluid experience Windsurf is known for. It was a turbulent journey for users and the platform alike.

That’s now over. As of July 17, Claude 4 Sonnet is directly integrated into Windsurf again. You open the app, click a dropdown, and Sonnet’s there — no more hacks, no more limits. This signifies a successful restoration of support and improved collaboration between Windsurf and Anthropic, much to the relief of the developer community. It’s clean, fast, and shockingly good.

In fact, this might just be the best Claude experience available anywhere right now. The way Sonnet integrates into Cascade — Windsurf’s multi-agent AI flow system — feels like watching the future unfold in real time. Cascade breaks your prompts into intelligent stages, keeps memory across actions, and even offers live suggestions while you type. Now, with the raw power of Sonnet 4 plugged into that, it feels like pair programming with an elite coder who has already thoroughly digested your entire codebase.

The 200K context window means it can see everything — not just your current file, but your whole project: imports, dependencies, comments, TODOs, legacy bugs. Sonnet reads all of it, understands it, and then acts on it with unparalleled precision. You can ask it to upgrade your framework, optimize a specific component, or redesign an entire backend architecture — and it doesn’t blink. It just does it.

Add to that multi-file refactoring, which is handled intelligently without you needing to manually stage files or explain how everything is connected. Just describe the goal, and Claude intelligently does the wiring, making complex changes feel effortless.

The code it writes doesn’t feel “AI-generated.” It feels like code written by someone experienced — it follows the tone and patterns of your project, names things sensibly, and almost never makes you stop and think, “Wait, what is this supposed to be?”

For Pro users, you get 250 calls/month, billed at 2× credits — but for the sheer quality and effectiveness of Sonnet, that’s a deal that quickly pays for itself. Claude’s output is so effective that it drastically cuts out a ton of trial and error, which ultimately saves more time (and more credits) than even faster models that often need constant babysitting and manual correction.

Windsurf’s focus on enterprise-grade security and compliance (SOC 2 Type 2, FedRAMP High, HIPAA) enhances its value even further, making it a powerful solution for organizations seeking both efficiency and peace of mind.

This is all part of a major new string of updates from Windsurf, solidifying its position at the cutting edge of AI-powered development. With Claude 4 Sonnet now fully native, there’s truly no friction. No switching tabs, no key juggling, no API rate worries. Just open your editor and build.

And we haven’t even seen what happens if Opus 4 gets native access next.

This isn’t just a good update — it’s a massive leap forward for developer productivity and the future of coding. If you’ve been sleeping on Claude and Windsurf, now’s the time to wake up and ship.