Tari Ibaba

Tari Ibaba is a software developer with years of experience building websites and apps. He has written extensively on a wide range of programming topics and has created dozens of apps and open-source libraries.

Claude Code finally fixed its biggest flaw — this is huge

Every single developer using Claude Code is about to get way more powerful & productive than they already are.

This new Claude Code update finally fixes a major issue that’s been quietly hurting its accuracy for months — and many of us never even knew.

All this time, Claude Code has been bloating up your context in the background with unnecessary data from every single one of your MCP servers.

It didn’t matter whether you actually used them in any given prompt — if you had 100 MCP servers, it would dump the complex tool definitions and metadata for all of them into your context, no exceptions.

Drowning out context that actually matters and dragging accuracy down.

But now, with the new Tool Search feature in Claude Code, this problem is finally gone for good.

They’ve fixed everything — and in a way any web developer will instantly recognize.

The old MCP experience was quietly broken

Here’s what was happening before:

  • You connect a few MCP servers
  • Each server exposes a bunch of tools
  • Claude loads all of them at startup
  • Your context window gets eaten alive
  • Tool selection gets worse as your tool list grows

So even before Claude starts thinking about your actual code, it’s already wasting tokens on tool schemas you may never use in that session.

The more “power user” you became, the worse things got.

That’s backwards.

Tool Search changes everything — with a neat trick from web dev

With Tool Search enabled, Claude Code stops doing dumb work up front.

Instead of loading everything, it does this:

  • Nothing is loaded at startup
  • Claude keeps MCP tools out of context by default
  • When a task comes up, Claude searches for relevant tools
  • Only the tools it actually needs get pulled in
  • Everything else stays out of the way

Same MCP. Same tools.
But with lazy loading: massively better behavior.

This is exactly how modern AI tooling should work.
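To make the idea concrete, here’s a toy sketch of eager vs. lazy tool loading. This is plain Python with made-up tool names, not Claude Code’s actual internals, but it captures the behavior change: definitions only enter context when a search matches them.

```python
# Toy tool catalog -- names and descriptions are invented for illustration.
TOOL_CATALOG = {
    "github_create_issue": "Open a new issue in a GitHub repository.",
    "postgres_run_query": "Run a read-only SQL query against Postgres.",
    "slack_send_message": "Post a message to a Slack channel.",
}

def eager_context():
    # Old behavior: every tool definition enters context at startup.
    return list(TOOL_CATALOG.items())

def lazy_context(task: str):
    # New behavior: only tools whose name or description matches the
    # task's keywords are pulled into context.
    words = [w for w in task.lower().split() if len(w) > 3]
    return [
        (name, desc)
        for name, desc in TOOL_CATALOG.items()
        if any(w in desc.lower() or w in name for w in words)
    ]

print(len(eager_context()))  # 3 -- everything, always
print([n for n, _ in lazy_context("open a github issue")])  # ['github_create_issue']
```

With 3 tools the difference is trivial; with 100 servers’ worth of schemas, it’s the difference between a drowned context window and a focused one.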

Why this is so huge

1. You instantly get more usable context

This is the obvious win — and it matters a lot.

Tool schemas can be massive. When you’re running multiple MCP servers, you’re talking thousands (sometimes tens of thousands) of tokens wasted on definitions alone.

Lazy loading gives that space back to:

  • real code
  • repo context
  • actual reasoning

That alone makes Claude Code feel noticeably smarter.

2. Tool selection gets better, not worse

Too many tools hurt accuracy in another crucial way:

When a model sees a huge wall of tools, it’s harder for it to consistently pick the right one. Lazy loading narrows the decision space.

Claude now:

  • searches for tools relevant to this task
  • loads a small, focused set
  • chooses more reliably

That’s not theoretical — it’s how Anthropic designed Tool Search to scale.

3. MCP finally scales the way you always wanted

Before this update, connecting more MCP servers felt risky:

“Am I about to blow up my context just by having this enabled?”

But now you can keep everything connected.

With lazy loading, unused MCP servers are basically free. They don’t cost context until Claude actually needs them.

That changes how you think about building and composing MCP ecosystems.

It turns on automatically (which is perfect)

Claude Code enables Tool Search automatically once your MCP tool definitions would take more than 10% of the context window.

That’s smart:

  • small setups stay simple
  • big setups get optimized
  • no babysitting required

Very important: This changes how MCP servers should be written

Because Claude now searches for tools instead of seeing them all at once, your MCP server descriptions actually matter.

Good servers:

  • clearly state what problems they solve
  • make it obvious when Claude should use them
  • have clean, intentional tool naming

Bad descriptions = your tools don’t get discovered.

Lazy loading turns MCP servers into discoverable “capabilities” instead of background noise.
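For instance, an MCP tool definition (the `name`/`description`/`inputSchema` shape from the MCP spec) built for discovery spells out exactly when it applies. The tool itself here is hypothetical:

```json
{
  "name": "create_release_notes",
  "description": "Generate release notes from merged pull requests. Use when the user asks to summarize changes for a release, changelog, or version announcement.",
  "inputSchema": {
    "type": "object",
    "properties": {
      "repo": { "type": "string", "description": "owner/name of the repository" },
      "tag": { "type": "string", "description": "release tag, e.g. v2.1.0" }
    },
    "required": ["repo", "tag"]
  }
}
```

A description like “Creates notes” would be technically accurate and practically invisible to tool search.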

Google just made AI coding agents more powerful than ever

This is going to have such a massive positive impact on the accuracy and reliability of AI agents in software development.

The new Skills feature in the Google Antigravity IDE finally solves the problem of AI agents giving us wildly unpredictable/inaccurate results for the same prompt.

Too little context is terrible for agent accuracy — but things can get even worse when your agent has access to TOO MUCH context for a particular task.

The truth is your coding agent has access to a boatload of input/context that isn’t necessary for any given task — but still takes part in the agent’s thinking process.

Every single file and folder from every segment of your codebase… all the frontend, all the backend, all the tests, scripts, utilities, style guides…

Even the MCP servers you have connected become part of the context…

So what do you think is gonna happen when you give instructions like, “Fix the password reset bug in the API”?

Your agent is going to take every piece of context it has into consideration when deciding how best to respond to you.

You were only expecting it to change 2 files in the backend, but it went ahead and changed 27 files all over the place (“Oh, this vibe coding thing is such a scam, I knew it”).

Because you gave it the full responsibility of figuring out what exactly you were thinking. Figuring out the precise locations where you wanted changes made. Essentially, reading your mind — when all you gave it was a painfully vague instruction.

And while it can do that a decent amount of the time, other times it fails miserably. “Miserably” as far as what you were expecting is concerned.

And this is exactly what this new Skills feature from Google is trying to solve.

Skills let you finally give structure to the agent — you can now specify a high-level series of tasks the agent should perform in response to certain kinds of prompts.

Instead of using all the context and input all the time, the agent processes only the context relevant to the task at hand.

It can still intelligently decide how to make changes to your codebase — but only within the framework and constraints you’ve provided with Skills.

And this is the major breakthrough.

What a Skill actually is

A Skill is just a small folder that defines how a certain kind of task should be done.

At the center of that folder is a file called SKILL.md. Around it, you can optionally include:

  • scripts the agent can run,
  • templates it should follow,
  • reference docs it can consult,
  • static assets it might need.

You can scope Skills:

  • per project (rules for this repo only),
  • or globally (rules that follow you everywhere).

That means you can encode “how we do things here” once, instead of re-explaining it every time.

The key idea: Skills load only when needed

This is the part that actually makes things more reliable.

Antigravity doesn’t shove every Skill into the model’s context up front. Instead, it keeps a lightweight index of what Skills exist, and only loads the full instructions when your request matches.

So if you ask to:

  • commit code → commit rules load
  • fix a bug → bug-fix workflow loads
  • change a schema → safety rules load

Everything else stays out of the way.

Less noise. Less confusion. Fewer “creative interpretations” where you didn’t want any.
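In spirit, the index works something like the toy sketch below. This is purely illustrative (Antigravity’s real matching is surely smarter than keyword lookup), but it shows the shape: lightweight triggers stay resident, and a full playbook loads only on a match.

```python
# Toy skill index: only trigger keywords are always "in context";
# a playbook loads only when a request matches. Names are invented.
SKILL_INDEX = {
    "commit": ["commit", "stage", "changelog"],
    "bugfix": ["bug", "fix", "broken", "failing"],
    "schema": ["schema", "migration", "column"],
}

PLAYBOOKS = {
    "commit": "1. Stage only related files.\n2. Write a conventional commit message.",
    "bugfix": "1. Reproduce the bug.\n2. Write a failing test.\n3. Fix, then re-run tests.",
    "schema": "1. Never drop columns.\n2. Ship a reversible migration.",
}

def load_skill(request: str):
    req = request.lower()
    for name, keywords in SKILL_INDEX.items():
        if any(kw in req for kw in keywords):
            return name, PLAYBOOKS[name]  # only now does the playbook load
    return None, ""                       # no match: nothing extra enters context
```

Ask to “fix the password reset bug” and only the bug-fix playbook loads; the commit and schema rules never touch the context.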

What goes inside SKILL.md

A Skill has two layers:

1) The trigger

At the top is a short description that says when this Skill should be used.
This is what Antigravity matches against your request.

2) The playbook

The rest is pure instruction:

  • step-by-step workflows
  • constraints (“don’t touch unrelated files”)
  • formats (“output a PR summary like this”)
  • safety rules

When the Skill activates, this playbook is injected into context and followed explicitly.
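Putting the two layers together, a SKILL.md for a bug-fix workflow might look roughly like this. Treat it as a sketch: the exact frontmatter fields Antigravity expects may differ, and the steps here are invented examples.

```markdown
---
name: bug-fix
description: Use when the user reports a bug or asks to fix failing behavior.
---

# Bug-fix workflow

1. Reproduce the bug and confirm the failing behavior.
2. Locate the smallest set of files responsible.
3. Write a failing test before changing any code.
4. Fix the bug. Do not touch unrelated files.
5. Re-run the test suite and report the results.
```

The one-line description at the top is the trigger; everything below it is the playbook that gets injected when the trigger fires.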

Another powerful example: commit messages that stop being garbage

Imagine a Skill whose entire job is to handle commits.

Instead of:

“Commit these changes (and please follow our style)”

You encode:

  • allowed commit types
  • subject length limits
  • required “why” explanations
  • forbidden vague messages

Now whenever you say:

“Commit this”

The agent doesn’t improvise.
It follows the rules.

Same input.
Same standards.
Every time.

That’s reliability.

3 important ways

Skills improve reliability in three important ways.

1. They turn tribal knowledge into enforcement

Instead of hoping the agent remembers how your team works, you encode it.

2. They can delegate to real scripts

For things that shouldn’t rely on judgment — tests, validation, formatting — a Skill can call actual scripts and report results. That’s deterministic behavior, not vibes.

3. They narrow the decision space

A tightly scoped Skill reduces guesswork. The agent is less likely to invent a workflow when you’ve already defined one.

This new MCP server from Google just changed everything for app developers

Wow this new MCP server from Google is going to change a whole lot for app developers.

Your apps are about to become something your users actually care to use.

You’ll finally be able to effortlessly understand your users without having to waste time hopelessly going through mountains of Analytics data.

Once you set up the new official Google Analytics MCP server, you’ll be able to ask the AI intuitive, human-friendly questions:

  • “Which acquisition channel brings users who actually retain?”
  • “Did onboarding improve after the last release? Show me conversion by platform”

And it’ll answer using the massive amount of data sitting inside your analytics.

No more surfing through event tables and wasting time trying to interpret what numbers mean for your product. You just ask the AI exactly what you want to know.

Analytics becomes a seamless part of your workflow.

Don’t ignore this.

This is the first-class, Google-supported MCP (Model Context Protocol) server for Google Analytics.

MCP is now the standard way for an AI tool (like Gemini) to connect to external systems through a set of structured “tools.”

Instead of the model guessing from vibes, the AI can call real functions like “list my GA properties” or “run a report for the last 28 days,” get actual results back, and then reason on top of those results.

So think of the Google Analytics MCP server as a bridge:

  • Your AI agent on one side
  • Your GA4 data on the other side
  • A clean tool interface in the middle

What can it do?

Under the hood, it uses the Google Analytics APIs (Admin for account/property info, Data API for reporting). In practical terms, it gives your AI the ability to:

  • list the accounts and GA4 properties you have access to
  • fetch details about a specific property
  • check things like Google Ads links (where relevant)
  • run normal GA4 reports (dimensions, metrics, date ranges, filters)
  • run realtime reports
  • read your custom dimensions and custom metrics, so it understands your schema

Also important: it’s read-only. It’s built for pulling data and analyzing it, not for changing your Analytics configuration.
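Under the hood, a reporting call boils down to a standard GA4 Data API `runReport` request body like the one below (the property ID is a placeholder):

```json
{
  "property": "properties/123456789",
  "dateRanges": [{ "startDate": "28daysAgo", "endDate": "today" }],
  "dimensions": [{ "name": "platform" }],
  "metrics": [{ "name": "activeUsers" }, { "name": "conversions" }],
  "limit": 25
}
```

The MCP server’s job is to translate your plain-English question into requests like this, run them, and hand the results back to the model to interpret.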

A game changer

A big reason many people don’t use analytics deeply isn’t because they don’t care.

It’s because it’s slow, complex and annoying.

You open GA → you click around → you find a chart → it doesn’t answer the real question → you add a dimension → now it’s messy → you export → you still need to interpret it in the context of your app.

With MCP, you can move closer to the way you actually think:

  • “Did onboarding improve after the last release? Show me conversion by platform.”
  • “What events tend to happen right before users churn?”
  • “Which acquisition channel brings users who actually retain?”
  • “What changed this week, and what’s the most likely cause?”

That’s what makes this feel different. It’s not “analytics in chat” as a gimmick — it’s analytics as a fast feedback loop.

High-level setup

The official path is basically:

  1. enable the relevant Analytics APIs in a Google Cloud project
  2. authenticate using Google’s recommended credentials flow with read-only access
  3. add the server to your Gemini MCP config so your agent can discover and call the tools

After that, your agent can list properties, run reports, and answer questions grounded in your real GA4 data.
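Step 3 means adding an entry under `mcpServers` in your Gemini CLI settings.json. A sketch of what that might look like, assuming the server is installed via pipx and you’re using a service-account credentials file (check the official README for the current command and package name):

```json
{
  "mcpServers": {
    "analytics": {
      "command": "pipx",
      "args": ["run", "analytics-mcp"],
      "env": {
        "GOOGLE_APPLICATION_CREDENTIALS": "/path/to/credentials.json"
      }
    }
  }
}
```

On the next launch, Gemini CLI discovers the server’s tools and can call them when your questions touch Analytics.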

This isn’t just a nicer interface for analytics—it’s a fundamental shift in how you build products people actually want to use. When your data becomes something you can ask instead of hunt, you make better decisions faster, and your app becomes something users genuinely love spending time in.

A real difference maker.

10 incredible AI tools for software development

10 incredible AI tools to completely transform your software development.

Design, coding, terminal work, testing, deployment… every part of your workflow moves faster than ever.

Idea in — Product out.

1. UI Design: Figma Make

This is where the speed really starts.

Instead of staring at a blank frame and dragging rectangles for 45 minutes, you just describe what you want.

Dashboard, landing page, onboarding flow—boom, it’s there. And the best part: It’s still normal Figma. You can tweak spacing, colors, components, whatever.

No locked-in AI mockups. Just fast first drafts that you refine and move on from.

2. IDE + Agent: Windsurf (multi-agent mode)

Windsurf makes you feel like you actually have teammates.

The built-in agent (Cascade) understands your project, makes multi-file changes, and doesn’t freak out the moment something gets complicated. Then you turn on multi-agent mode and suddenly:

  • One agent is handling frontend components
  • Another is building out your backend and Firestore models
  • A third is wiring up auth, payments, edge cases

You’re not typing every line anymore. You’re reviewing, nudging, and making high-level decisions.

Pro tip: drop an AGENTS.md file into your repo that explains how you like things done. Folder conventions, error handling patterns, naming rules. After that, the agents stop guessing and start working the way you actually build.
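There’s no fixed schema for AGENTS.md; it’s free-form instructions the agents read. A minimal sketch (the conventions below are invented examples, swap in your own):

```markdown
# AGENTS.md

## Conventions
- Components live in `src/components/`, one folder per component.
- Use named exports; no default exports.

## Error handling
- Wrap external calls in try/catch and surface failures to the user.

## Workflow
- Run `npm test` before declaring a task complete.
- Never edit generated files under `src/gen/`.
```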

3. Terminal Intelligence: Gemini CLI

This is your “do stuff for me” layer.

Gemini CLI lives in your terminal and acts like a real assistant instead of just a chat window. You can ask it to:

  • Scan your repo and suggest a refactor
  • Generate tests and run them
  • Fix broken builds
  • Migrate files or rewrite APIs

Basically: all the annoying glue work that usually eats your time? Hand it off.

4. Payments: Stripe

Still undefeated.

Stripe just works. Clean APIs, great docs, predictable patterns. Subscriptions, one-time payments, webhooks, retries—it’s all there and battle-tested.

In this stack, your agent handles most of the setup, you review the flow, and suddenly your app can take money without you building a payment system from scratch.

Which, let’s be honest, you never wanted to do anyway.

5. Database: Firestore

Firestore is perfect for shipping fast.

Document-based, flexible, and easy to evolve as your product changes. You don’t need to design the “perfect schema” on day one. You model what you need now, and refactor later when the shape of your product is clearer.

Great for user profiles, app state, feature data, and basically anything that isn’t hardcore analytics or financial reporting.

6. Error Handling: Sentry

If you’re moving fast, things will break. The question is whether you see it immediately or hear about it from a user three days later.

Sentry gives you:

  • Stack traces
  • Performance issues
  • Context about what users were doing when things blew up

So when your agents ship something that’s 99% right, you catch the 1% before it becomes a nightmare.

7. Hosting: Fly.io

Fly is “just deploy” hosting.

You push, it builds, it runs. No endless config rituals. No wrestling with infrastructure when you just want your app live.

This pairs insanely well with an AI-driven workflow: small changes, frequent deploys, fast feedback, easy rollbacks.

Ship, break, fix, repeat.

8. Analytics: Google Analytics

You can’t improve what you don’t measure.

GA4 gives you event tracking, funnels, and user behavior without much setup. Once it’s wired in, your features stop being “I think this is useful” and start being “users actually clicked this 3× more than the old version.”

Your agents build features. Analytics tells you whether they were worth building.

9. Security: Auth0

Auth is one of those things you either do right… or regret forever.

Auth0 handles login, tokens, permissions, OAuth, all the boring but critical stuff. You get solid security without reinventing authentication.

Which means you can focus on product logic instead of spending two weeks on password resets and token refresh bugs.

10. Testing & Vetting: TestSprite

This is your final safety net.

After your agents implement features, TestSprite helps generate and run tests, especially end-to-end flows. “Can a user sign up, pay, and reach the dashboard?” That kind of real-world check.

Combine it with Sentry:

  • Tests catch what breaks immediately
  • Sentry catches what still slips through

That’s how you move fast without lighting production on fire.

This new Rust IDE is an absolute game changer

Woah this is huge.

This new Zed IDE is absolutely incredible — the speed and flexibility is crazy.

Built with Rust — do you have any idea what that means?!

This is not yet another painfully boring VS Code fork… this is serious business.

As far as performance goes there is no way in hell VS Code is going to have any chance against this.

Even the way the AI works is quite different from what we’re used to:

Look at how we easily added the precise context we needed — but just wait and see what we do next after this…

Now we send messages to the AI — but this is not like your regular Copilot or Cursor stuff…

We are not even making changes directly, we are creating context…

Context that we can then use with the inline AI — like telling it to apply specific sections of what the AI said as changes:

Blazing fast:

This is an open-source IDE where performance, multiplayer, and agent workflows are first-class infrastructure.

Available everywhere you are.

Lightning-fast agentic support from the ground up

We’re already know about agentic coding — but what Zed gets right is where those agents live.

Instead of bouncing between your editor, a terminal, and three side tools, Zed gives you a native Agent Panel inside the editor. Agents can reason over your codebase, propose multi-file changes, and walk you through decisions in the same place you’re already editing.

Even better: Zed isn’t trying to lock you into one model or one workflow. It’s built to plug into the agent tools you already use. If you’re running terminal agents, Zed can talk to them directly. If you’re building toolchains around MCP servers, Zed already speaks that language. The editor becomes the hub where humans and agents actually collaborate instead of taking turns.

This is what makes it different. Not “AI in the editor,” but the editor designed around AI as a teammate.

Speed is the architecture, not the slogan

Zed was built in Rust with a hard focus on performance. It uses modern rendering and multi-core processing so the UI stays fluid even on big projects. No input lag. No “why did my editor freeze” moments. It feels lightweight in a way most editors stopped feeling years ago.

And when Zed came to Windows, they didn’t ship a half-baked port. They implemented a real rendering backend, deep WSL integration, and made Windows a first-class platform. That’s not checkbox compatibility—that’s engineering discipline.

If you care about flow state, this matters. The editor disappears. You move faster. You think more clearly. You ship more.

Serious business

Zed isn’t a weekend hacker tool. It’s backed by serious funding and built by people who’ve already shaped the modern editor ecosystem. That matters because it means long-term velocity: features land quickly, architectural bets actually get executed, and the product has a direction.

And that direction is clear: real-time collaboration between developers and agents inside the editor itself.

Not “connect to an AI.”
Not “paste code into a chat.”
But a shared workspace where humans and machines build together.

That’s the future Zed is aiming at—and it’s already usable today.

No-brainer pricing

Zed’s pricing is designed to get you in fast. The editor is free. The Pro plan is cheap, includes unlimited accepted AI edits, and gives you token credits for agent usage. There’s also a trial.

Translation: you don’t need a procurement meeting or a long debate. You can just install it and try your real workflow.

Which is exactly what they want you to do.

Definitely worth switching to

Zed is what happens when someone rebuilds the editor around modern realities instead of layering them on top of 2015-era assumptions.

You get:

  • A UI that feels instant.
  • Multiplayer and collaboration that are native.
  • Agents that live inside your workflow instead of beside it.
  • An architecture that respects performance and scale.
  • An open ecosystem that doesn’t lock you into one model, one vendor, or one style of “AI coding.”

If you’re already doing serious agent-driven development, Zed doesn’t ask you to change how you think—it finally gives you an environment that matches how you already work.

How to use Gemini CLI and blast ahead of 99% of developers

You’re missing out big time if you’re still ignoring this incredible tool.

There’s so much it can do for you — but many devs aren’t even using it anywhere close to its fullest potential.

If you’ve ever wished your terminal could think with you — plan, code, search, even interact with GitHub — that’s exactly what Gemini CLI does.

It’s Google’s command-line tool that brings Gemini right into your shell.

You type, it acts. You ask, it plans. And it works with all your favorite tools — including being powered by the same tech behind the incredible Gemini Code Assist:

It’s ChatGPT for your command line — but with more power under the hood.

A massive selling point has been the MCP servers — acting as overpowered plugins for Gemini CLI.

Hook it up to GitHub, a database, or your own API, and suddenly you’re talking to your tools in plain English. Want to open an issue, query a database, or run a script? Just ask.

How to get started fast

Just:

  npm install -g @google/gemini-cli
  gemini

You’ll be asked to sign in with your Google account the first time. Pick a theme, authenticate:

And you’re in:

Talking to Gemini CLI

There are two ways to use it:

  • Interactive mode — just run gemini and chat away like you’re in a terminal-native chat app.
  • Non-interactive mode — pass your prompt as a flag, like gemini -p "Write a Python script to…". Perfect for scripts or quick tasks.

Either way, Gemini CLI can do more than just text. It can:

  • Read and write files in your current directory.
  • Search the web.
  • Run shell commands (with your permission).

The secret sauce

Here’s where it gets exciting. MCP (Model Context Protocol) servers are like power-ups. Add one for GitHub and you can:

  • Clone a repo.
  • Create or comment on issues.
  • Push changes.

Add one for your database or your docs, and you can query data, summarize PDFs, or pull in reference material without leaving the CLI.

All you do is configure the server in your settings.json file. Gemini CLI then discovers the tools and lets you use them in natural language.
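For example, wiring up a GitHub MCP server in settings.json might look like this. The package and token variable follow the common community setup; check the server’s own docs for the current names, and never commit a real token:

```json
{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": {
        "GITHUB_PERSONAL_ACCESS_TOKEN": "ghp_your_token_here"
      }
    }
  }
}
```

After a restart, Gemini CLI lists the server’s tools and you can say things like “open an issue for the bug we just found.”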

Give Gemini a memory with GEMINI.md

Create a GEMINI.md in your project and drop in your project’s “personality.” It can be as simple as:

Always respond in Markdown.
Plan before coding.
Use React and Tailwind for UI.

Use Yarn for NPM package installs

Next time you run Gemini CLI, it will follow those rules automatically. You can check what memory it’s using with /memory show.

Slash commands = Instant prompts

If you do the same thing a lot — like planning features or explaining code — you can create a custom slash command.

Make a small TOML file in .gemini/commands/ like this:

description = "Generate a plan for a new feature"
prompt = "Create a step-by-step plan for {{args}}"

Then in Gemini CLI just type:

/plan user authentication system

And boom — instant output.

Real-world examples

Here’s how people actually use Gemini CLI:

  • Code with context — ask it to plan, generate, or explain your codebase.
  • Automate file ops — have it sort your downloads, summarize PDFs, or extract data.
  • Work with GitHub — open issues, review PRs, push updates via natural language.
  • Query your data — connect a database MCP server and ask questions like a human.

Safety first

Gemini CLI can run shell commands and write files, but it always asks first. You can allow once, always, or deny. It’s like having a careful assistant who double-checks before doing anything risky.

Gemini CLI isn’t just another AI interface. It’s a workbench where you blend AI with your existing workflows. Instead of hopping between browser tabs, APIs, and terminals, you get one cohesive space where you talk and it acts.

Once you add MCP servers, GEMINI.md context, and slash commands, it starts to feel less like a tool and more like a teammate who lives in your terminal.

The myth of the AI coding bubble

Hundreds of millions of dollars.

Every single month.

And not from desperate investors or idle hobbyists — from real developers.

From serious companies that mean business.

AI coding is an industry inching closer to the $10 billion mark with each passing year.

There are still devs on Reddit who think they are too good for AI — but the biggest tech giants now have AI writing major portions of their codebase — and they are not going back.

Google now has AI writing as much as 50% of their codebase.

We have 20 million GitHub Copilot users.

GitHub Copilot Enterprise customers increased 75% quarter over quarter as companies tailor Copilot to their own codebases.

And 90% of the Fortune 100 now use GitHub Copilot.

Microsoft Fiscal Year 2025 Fourth Quarter Earnings Conference Call

In this article, we’re going to go through why this wave of AI tooling is fundamentally different—starting with the one thing bubbles never have: real money.

The money is real (and it’s massive)

In a real bubble, companies burn billions with no path to profit. Yet AI coding tools are already printing money. Microsoft’s latest reports show that GitHub’s Annual Recurring Revenue (ARR) has crossed the $2 billion mark. GitHub Copilot alone accounts for over 40% of GitHub’s total revenue growth.

This isn’t a “pilot program” or a free beta; this is a product that millions of developers and thousands of companies are paying for because it delivers immediate, measurable value.

In fact, Satya Nadella recently noted that Copilot is already a larger business than the entirety of GitHub was when Microsoft acquired it in 2018.

“Just a toy”

The “it’s just a toy” argument dies when you look at who is actually using these tools. This isn’t just for hobbyists or “vibe coders” building weekend projects. According to Microsoft’s 2025 earnings data, over 90% of the Fortune 100 are now using GitHub Copilot.

When companies like Goldman Sachs, Ford, and P&G integrate a tool into their core engineering workflow, they aren’t chasing a trend—they’re chasing efficiency. They’ve done the math. If an engineer costing $200k a year becomes even 20% more productive, the $20-per-month subscription isn’t an expense; it’s the highest ROI investment the company has ever made.

Stack Overflow

If you want to see the “bubble” argument fall apart, look at the casualties of this revolution. We are witnessing the Stack Overflow collapse. For a decade, the standard workflow was: Encounter bug → Google error → Find Stack Overflow thread → Copy/Paste.

That era is over. Recent data shows that Stack Overflow traffic has plummeted, with the rate of new questions dropping by a factor of 10. Why? Because developers no longer need to wait for a human to answer their question in three hours when an AI can solve it in three seconds. This shift in developer behavior is permanent. You don’t “un-learn” that level of speed.

The speed of human thought

The most profound reason this isn’t a bubble is philosophical but practical: AI increases the speed of human thought actualizing itself in software. Historically, the bottleneck of software was the “syntax tax.” You had a great idea, but you had to spend hours wrestling with boilerplate, configuration, and documentation. AI removes that friction. It allows a developer to stay in “the flow,” moving from concept to execution at the speed of thought.

We aren’t just writing code faster; we are thinking bigger. When the “cost” of trying a new feature or refactoring a messy codebase drops to near zero, innovation explodes.

The dot-com bubble burst because the internet wasn’t ready for the promises being made. In 2026, the AI coding revolution is different: the infrastructure is here, the revenue is proven, and the productivity gains are undeniable.

This isn’t a bubble. It’s the end of the “typing” era of software engineering and the beginning of the “architecting” era. If you’re waiting for the pop, you’re just going to get left behind.

10 must-have AI coding tools for software developers in 2026

Don’t tell me all you know how to use is Copilot or Cursor.

Oh wait, you’re even telling me that generating code is all you use AI for?

Honestly I’m shaking my head at how much potential you’ve been wasting as a software developer.

Don’t you know there’s so much more to AI coding than just sending prompts to an agent and waiting for code to drop from the sky?

1) v0

Do you see what I’m talking about?

There’s even AI for the stage before you even think of writing a single line of code.

Use v0 to turn all your amazing ideas into working designs with remarkable efficiency, especially on the frontend.

Why it matters

  • Generates complete, usable UI and app scaffolding
  • Can push code directly into real projects, not just prototypes
  • Strong alignment with modern frontend patterns and component libraries

Best for

  • Rapid UI development, internal tools, dashboards, and early product versions

2) Qodo

Qodo focuses on improving code quality rather than just generating more code.

Why it matters

  • Acts as an AI-powered reviewer across IDEs and pull requests
  • Encourages consistent standards and better engineering discipline
  • Scales review quality across teams and repositories

Best for

  • Teams that want fewer regressions and stronger code governance

3) Google Stitch

Stitch sits at the intersection of design and development, transforming ideas and visuals into usable UI and code.

Why it matters

  • Converts text prompts and images into UI layouts and frontend code
  • Bridges design and engineering workflows smoothly
  • Speeds up iteration between concepts and implementation

Best for

  • Frontend developers working closely with designers
  • Teams exploring multiple UI directions quickly

4) Multi-agent mode + Windsurf

Already a new era of AI coding.

Thanks to recent upgrades, IDEs like Windsurf now let you have multiple coding agents working on your codebase — at the same time.

You can add several features at once — and also fix bugs while you’re at it.

Your very own army of developers working together to build something incredible.

Why it matters

  • Handles multi-file and repo-wide changes naturally
  • Supports multiple agents working in parallel on the same codebase
  • Integrates planning, execution, and review inside the editor

Best for

  • Large refactors, new features, complex debugging, and coordinated development tasks

5) Google Antigravity (with artifacts)

By far the most standout feature of Google Antigravity.

Artifacts: a new way for coding agents to communicate the process they used to make changes for you.

Screenshots, recordings, step-by-step checklists… artifacts let you know exactly what happened in the most intuitive way possible.

For example, Antigravity recorded a video of itself testing the web app I told it to create.

Antigravity focuses on agent orchestration and accountability rather than just code generation.

Why it matters

  • Lets you dispatch multiple agents for long-running or complex tasks
  • Produces artifacts like plans, diffs, logs, and walkthroughs for review
  • Emphasizes transparency and safety in autonomous workflows

Best for

  • Complex coding, multi-step fixes, and tasks that require traceability

6) Claude Code

Claude Code brings agentic coding directly into the terminal, fitting naturally into existing developer habits.

Why it matters

  • Works directly inside real repositories
  • Handles planning, implementation, and explanation in one flow
  • Ideal for developers who live in the CLI

Best for

  • Terminal-first workflows, scripting, and repo-wide reasoning

7) Gemini CLI

Gemini CLI is a terminal-based AI agent designed for structured problem solving and tool use.

Why it matters

  • Can reason through tasks step by step
  • Interacts with files, shell commands, and external tools
  • Extensible through custom integrations

Best for

  • Automating repetitive tasks
  • Exploring unfamiliar codebases quickly

8) Testim

Testim uses AI to make automated testing faster to create and easier to maintain.

Why it matters

  • Generates tests from high-level descriptions
  • Reduces flaky tests and maintenance overhead
  • Adapts better to UI changes than traditional test frameworks

Best for

  • Frontend-heavy applications
  • Teams struggling with brittle end-to-end tests

9) Snyk AI

Snyk AI brings security directly into the AI-driven development loop.

Why it matters

  • Automatically suggests fixes for vulnerabilities
  • Fits naturally into pull request and CI workflows
  • Helps teams keep up with security as development speeds increase

Best for

  • Organizations shipping quickly without compromising security

10) Mintlify

In 2026, documentation is part of the product. Mintlify makes it easier to keep docs current, readable, and useful.

Why it matters

  • Designed for modern developer documentation workflows
  • Supports fast authoring and clean presentation
  • Makes docs more usable for both humans and AI tools

Best for

  • API documentation, platform docs, and internal knowledge bases

AI isn’t here to type faster—it’s here to expand how you think, design, collaborate, review, ship, and own the entire lifecycle of what you build.

The real leverage comes when you let AI shape ideas, decisions, quality, speed, and trust—before, during, and after the code ever exists.

20 free & open-source tools to completely destroy your SaaS bills

SaaS is everywhere, and subscription costs add up fast. Open-source offers a powerful way out: tools that give you both control and savings. Let’s explore 20 options to cut your SaaS expenses.

1. Supabase

It’s an open-source Firebase alternative. Build and scale easily.

Key Features:

  • Managed PostgreSQL Database: Reliable and less operational hassle.
  • Realtime Database: Live data for interactive apps.
  • Authentication and Authorization: Secure user management built-in.
  • Auto-generated APIs: Faster development from your database.
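
The auto-generated API point is worth making concrete. Supabase exposes each Postgres table through PostgREST-style REST endpoints, so a table becomes queryable over HTTP without writing any backend code. A rough sketch of the URL shape (the `myproject` ref and `todos` table are made-up placeholders, and a real request would also need the project API key in a header):

```typescript
// Sketch: how Supabase's auto-generated REST layer (PostgREST) addresses a
// table. Filters like `done=eq.true` follow PostgREST's operator syntax.
function postgrestUrl(
  projectRef: string,
  table: string,
  filters: Record<string, string> = {},
): string {
  const params = new URLSearchParams({ select: "*", ...filters });
  return `https://${projectRef}.supabase.co/rest/v1/${table}?${params}`;
}

// e.g. all completed todos:
const url = postgrestUrl("myproject", "todos", { done: "eq.true" });
// → "https://myproject.supabase.co/rest/v1/todos?select=*&done=eq.true"
```

In practice you would use the official `supabase-js` client rather than raw URLs, but this is the surface it talks to.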

2. PocketBase

A lightweight, all-in-one backend. Setup is incredibly simple.

Key Features:

  • Single Binary Deployment: Easy to deploy.
  • Built-in SQLite Database: Fast and no extra install.
  • Realtime Subscriptions: Reactive UIs are simple.
  • Admin Dashboard: Manage data visually.

3. Dokku

Your own mini-Heroku. Deploy apps easily on your servers.

Key Features:

  • Git-Based Deployments: Deploy with a Git push.
  • Plugin Ecosystem: Extend functionality easily.
  • Docker-Powered: Consistent environments.
  • Scalability: Scale your apps horizontally.

4. Airbyte

Open-source data integration. Move data between many sources.

Key Features:

  • Extensive Connector Library: Connect to hundreds of sources.
  • User-Friendly UI: Easy pipeline configuration.
  • Customizable Connectors: Build your own if needed.
  • ELT Support: Simple to complex data movement.

5. Appwrite

A self-hosted backend-as-a-service. Build scalable apps with ease.

Key Features:

  • Database and Storage: Secure data and file management.
  • Authentication and Authorization: Robust user access control.
  • Serverless Functions: Run backend code without servers.
  • Realtime Capabilities: Build interactive features.

6. Ory Kratos

Open-source identity management. Security and developer focus.

Key Features:

  • Multi-Factor Authentication (MFA): Enhanced security for users.
  • Passwordless Authentication: Modern login options.
  • Identity Federation: Integrate with other identity systems.
  • Flexible User Schemas: Customize user profiles.

7. Plane

Open-source project management. Clarity and team collaboration.

Key Features:

  • Issue Tracking: Manage tasks and bugs effectively.
  • Project Planning: Visualize timelines and sprints.
  • Collaboration Features: Easy team communication.
  • Customizable Workflows: Adapt to your processes.

8. Coolify

A self-hosted PaaS alternative. Simple deployment of web apps.

Key Features:

  • Simplified Deployment: Deploy with a few clicks.
  • Automatic SSL Certificates: Free SSL via Let’s Encrypt.
  • Resource Management: Monitor and scale resources.
  • Support for Multiple Application Types: Versatile deployment.

9. n8n

Free, open-source workflow automation. Connect apps visually.

Key Features:

  • Node-Based Visual Editor: Design workflows easily.
  • Extensive Integration Library: Connect to many services.
  • Customizable Nodes: Integrate with anything.
  • Self-Hostable: Full data control.
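
The node-based idea is easy to picture in code. Here’s a minimal sketch, in plain TypeScript, of a workflow as functions wired in sequence (the concept only, not n8n’s actual API; node names here are invented):

```typescript
// Each "node" transforms the data it receives; a workflow is nodes in order.
type WorkflowNode<I, O> = (input: I) => O;

function runWorkflow<T>(input: T, nodes: WorkflowNode<any, any>[]): any {
  // Feed each node's output into the next, like wires in the visual editor.
  return nodes.reduce((data, node) => node(data), input as any);
}

// Example: filter a payload, then reshape it, as two chained "nodes".
const filterActive: WorkflowNode<{ name: string; active: boolean }[], { name: string }[]> =
  (users) => users.filter((u) => u.active);
const toNames: WorkflowNode<{ name: string }[], string[]> =
  (users) => users.map((u) => u.name);

const result = runWorkflow(
  [{ name: "Ada", active: true }, { name: "Bob", active: false }],
  [filterActive, toNames],
);
// result: ["Ada"]
```

n8n’s value is that you draw this graph visually and get hundreds of prebuilt nodes, but the data-flows-through-nodes model is the same.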

10. LLMWare

Build LLM-powered applications. Open-source tools and frameworks.

Key Features:

  • Prompt Management: Organize and test prompts.
  • Data Ingestion and Indexing: Prepare data for LLMs.
  • Retrieval Augmented Generation (RAG): Ground LLM responses.
  • Deployment Options: Flexible deployment choices.

11. LangchainJS

JavaScript framework for language models. Build complex applications.

Key Features:

  • Modular Architecture: Use individual components.
  • Integration with Multiple LLMs: Supports various providers.
  • Pre-built Chains and Agents: Ready-to-use logic.
  • Flexibility and Extensibility: Customize the framework.
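
To see what a prompt template does under the hood, here’s a tiny sketch of LangChain-style `{variable}` substitution in plain TypeScript (illustrative only; the real library wraps this in richer classes like `PromptTemplate` and composes them into chains):

```typescript
// Fill a template's {placeholders} from a map of variables.
// Unknown placeholders are left intact rather than erased.
function fillTemplate(template: string, vars: Record<string, string>): string {
  return template.replace(/\{(\w+)\}/g, (_, key) => vars[key] ?? `{${key}}`);
}

const prompt = fillTemplate(
  "Translate the following to {language}: {text}",
  { language: "French", text: "Hello" },
);
// prompt: "Translate the following to French: Hello"
```

A chain then pipes this filled prompt into a model call and a parser; the template step is what keeps the prompts reusable.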

12. Trieve

Open-source vector database. Efficient semantic search.

Key Features:

  • Efficient Vector Storage and Retrieval: Fast similarity search.
  • Multiple Distance Metrics: Optimize search accuracy.
  • Metadata Filtering: Refine search results.
  • Scalability: Handles large datasets.
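
The core operation behind any vector database, Trieve included, is similarity ranking. A minimal sketch of cosine-similarity search, with made-up 2-D vectors standing in for real embeddings (production systems use hundreds of dimensions and approximate indexes):

```typescript
// Cosine similarity: dot product of the vectors over the product of their norms.
function cosine(a: number[], b: number[]): number {
  const dot = a.reduce((s, x, i) => s + x * b[i], 0);
  const norm = (v: number[]) => Math.sqrt(v.reduce((s, x) => s + x * x, 0));
  return dot / (norm(a) * norm(b));
}

// Toy "index": documents with precomputed embedding vectors.
const docs = [
  { id: "cats", vec: [1, 0] },
  { id: "dogs", vec: [0.9, 0.1] },
  { id: "stocks", vec: [0, 1] },
];

// Rank every stored vector by similarity to the query vector.
const query = [1, 0.05];
const ranked = docs
  .map((d) => ({ id: d.id, score: cosine(query, d.vec) }))
  .sort((a, b) => b.score - a.score);
// ranked[0].id is the closest match ("cats")
```

Everything else a vector database offers (metadata filtering, multiple distance metrics, sharding) is built around making this ranking fast at scale.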

13. Affine

Open-source knowledge base and project tool. Notion and Jira combined.

Key Features:

  • Block-Based Editor: Flexible content creation.
  • Database Functionality: Structured information management.
  • Project Management Features: Task and progress tracking.
  • Interlinking and Backlinks: Connect your knowledge.

14. Hanko

Open-source passwordless authentication. Secure and user-friendly.

Key Features:

  • Passwordless Authentication: Secure logins without passwords.
  • WebAuthn Support: Industry-standard security.
  • User Management: Easy account and key management.
  • Developer-Friendly APIs: Simple integration.

15. Taubyte

Open-source edge computing platform. Run apps closer to users.

Key Features:

  • Decentralized Deployment: Deploy across edge nodes.
  • Serverless Functions at the Edge: Low-latency execution.
  • Resource Optimization: Efficient resource use.
  • Scalability and Resilience: Robust and scalable apps.

16. Plausible

Lightweight, privacy-friendly web analytics. An alternative to Google Analytics.

Key Features:

  • Simple and Clean Interface: Easy-to-understand metrics.
  • Privacy-Focused: No cookies, no personal tracking.
  • Lightweight and Fast: Minimal impact on site speed.
  • Self-Hostable: Own your data.

17. Flipt

Open-source feature flags and experimentation. Safe feature rollouts.

Key Features:

  • Feature Flag Management: Control feature visibility.
  • A/B Testing: Run controlled experiments.
  • Gradual Rollouts: Release features slowly.
  • User Targeting: Target specific user groups.
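
Gradual rollouts generally work by hashing each user into a stable bucket and comparing it to the rollout percentage. A sketch of the idea (the hash below is a simple illustrative one, not Flipt’s actual algorithm):

```typescript
// Hash a user id into a stable bucket in [0, 100).
function bucket(userId: string): number {
  let h = 0;
  for (const ch of userId) h = (h * 31 + ch.charCodeAt(0)) >>> 0;
  return h % 100;
}

// A flag is on for a user when their bucket falls under the rollout percentage.
function isEnabled(userId: string, rolloutPercent: number): boolean {
  return bucket(userId) < rolloutPercent;
}
```

Because the bucket is derived from the user id, the same user always gets the same answer: a 10% rollout stays stable across requests instead of flickering per call, and raising the percentage only ever adds users.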

18. PostHog

Open-source product analytics. Understand user behavior.

Key Features:

  • Event Tracking: Capture user interactions.
  • Session Recording: See how users behave.
  • Feature Flags: Integrated feature control.
  • A/B Testing: Experiment and analyze.

19. Logto

Open-source authentication and authorization. Modern app security.

Key Features:

  • Flexible Authentication Methods: Various login options.
  • Fine-Grained Authorization: Granular access control.
  • User Management: Easy user and permission management.
  • Developer-Friendly SDKs: Simple integration.

20. NocoDB

Open-source no-code platform. Turn databases into spreadsheets.

Key Features:

  • Spreadsheet-like Interface: Familiar data interaction.
  • API Generation: Automatic REST and GraphQL APIs.
  • Form Builders: Create custom data entry forms.
  • Collaboration Features: Teamwork on data and apps.

The open-source world offers great SaaS alternatives. You can cut costs and gain control. Explore these tools and free yourself from high SaaS bills. Take charge of your software stack.

Google just made Gemini CLI even more powerful for coding

You’re not asking for code.

You’re asking for code that fits.

Fits the architecture, the conventions, the product intent, your team’s taste.

And this is the problem with chat-centric AI workflows: they make the most important information temporary. The constraints live in a scroll.

Each new session quietly resets the context, and you end up restating rules that already should exist somewhere. Not because the tools can’t follow direction—but because the direction itself has no permanent home.

And that’s what Conductor is here to solve.

Conductor’s bet is simple: pin the context and plan as standalone artifacts in the codebase, so the implementation keeps snapping back to the same center.

Instead of keeping your project’s “truth” trapped inside a chat thread, Conductor puts it where it naturally belongs: inside your repo, as living Markdown files—the kind you can read, edit, commit, and share with your team.

And once that’s in place, the workflow changes in a powerful way.

The idea: context-driven development

Conductor is a preview extension for Gemini CLI that introduces what Google calls context-driven development. The principle is simple:

If you want consistent output, stop treating context like a one-time prompt… and start treating it like a maintained asset.

So Conductor scaffolds a small “brain” inside your repository—documents that define things like:

  • what you’re building (product intent)
  • how you build here (workflow + conventions)
  • what tools and frameworks matter (tech stack)
  • what “good code” looks like in this project (style guides)

Think about the last time you joined a new codebase. The hardest part wasn’t typing code. It was absorbing the unwritten rules. Conductor’s goal is to make those rules written—and keep them close to where the work happens.

The workflow: three moves, no drama

Conductor is built around a short loop, and you’ll feel it fast.

1) /conductor:setup — plant the roots

This command creates the baseline context docs in your repo. It’s basically Conductor saying: “Cool. Let’s make the project’s standards explicit.”

This is where you capture the stuff you normally repeat:

  • architecture expectations
  • repo conventions
  • testing preferences
  • coding style decisions
  • product boundaries

Once it’s there, it’s there.

2) /conductor:newTrack — turn “we should build X” into a real artifact

Conductor organizes work into tracks (features or bug fixes). When you create a new track, it generates two key files:

  • spec.md — what you want, and why it matters
  • plan.md — the step-by-step path to get there (phases, tasks, checklists)
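
To make the split concrete, here’s a hypothetical shape a generated plan.md might take (the feature, phases, and tasks below are invented for illustration; Conductor’s real output will differ):

```markdown
# Plan: add CSV export

## Phase 1: groundwork
- [ ] Add an export service module
- [ ] Define the CSV column mapping

## Phase 2: wiring
- [ ] Expose an "Export" action in the UI
- [ ] Write tests for the happy path
```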

This is the moment where things get interesting.

Because now you’re not just “asking for code.” You’re shaping intent in a way that’s reviewable. Editable. Shareable.

Quick micro-commitment: think about the last feature you built. Did you have a clear plan written down before you started? Or did the plan mostly live in your head?

3) /conductor:implement — build from the plan, task by task

Once the plan looks right, you run implement. The agent works through plan.md, checking items off as it goes and updating progress so you can stop and resume without losing the thread.

That’s the real win: the plan isn’t just a prelude. It becomes the backbone of execution.

The extra pieces that make it feel “team-ready”

Two small commands add a lot of confidence to the flow:

  • /conductor:status gives you a clear view of what’s in motion and what’s done.
  • /conductor:revert helps roll back changes in a way that maps to the work itself (tracks/tasks), not just “some commits somewhere.”

If you’ve ever wanted AI-assisted work to feel more like a well-run project and less like a one-off session, those details matter.

Why this clicks, especially on real codebases

Conductor isn’t trying to replace your engineering judgment. It’s trying to encode it.

And once your standards live as files in the repo, something subtle happens: your codebase stops being a thing you explain… and starts being a thing you extend.

Next time you want to build something with AI—anything—don’t just start with code.

Start with one track. One spec. One plan.

Then watch how much calmer the build feels when the work has a spine.