featured

5 tricks to make Claude Code go 10x crazy (amateur vs pro devs)

Garbage in, garbage out.

Many developers treat Claude Code like it’s supposed to magically read their minds — and then they get furious when it gives weak results.

Oh add this feature, oh fix that bug for me… just do it Claudy, I don’t care if I give you a miserably vague, low-quality prompt — I still expect the best Claudy.

And if you can’t give me what I asked for — then of course you’re worthless and vibe coding is total hype BS.

They just have no clue how to drive this tool.

Amateurs ask for code.

Pros ask for outcomes, constraints, trade-offs, and a plan.

They give Claude Code enough context to behave like a senior engineer who can reason, sequence work, and protect the codebase from subtle failure.

1) Use “think mode” for complex problems

If your prompt sounds like a simple low-effort task, Claude Code will give you… a simple low-effort solution.

If your prompt signals “this is a thinking problem”, you’ll get a completely different quality of output: constraints, risks, alternatives, and a step-by-step implementation plan.

Amateur prompt

Add authentication to my app.

Pro prompt (the “think” unlock)

I need you to think through a secure, maintainable authentication design for a React frontend with a Node/Express API. Compare cookie sessions vs JWT, include password hashing strategy, rate limiting, CSRF considerations, refresh-token handling (if relevant), and how this should fit our existing user model. Then propose an implementation plan with milestones and tests.

Why this works: you’re explicitly asking for architecture + trade-offs + sequencing, not “spit out code.”

Extra pro tip: add “assumptions” + “unknowns” to force clarity:

List assumptions you’re making, and ask me the minimum questions needed if something is missing.

2) Connect Claude Code to the world

Stop wasting Claude Code's potential: use MCP to connect it to external tools, including databases and developer APIs.

Pros don’t keep Claude Code hopelessly relegated to just writing code.

They extend it with tools so it can inspect your environment and act with real context.

Project-scoped MCP config means everyone on the team shares the same toolbelt, checked into the repo. New dev joins? They pull the project and Claude Code instantly knows how to access the same tools.
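
Claude Code reads project-scoped servers from a `.mcp.json` file at the repo root. Here's a sketch of what that might look like; the server packages and connection string below are examples, so swap in whatever your team actually uses:

```json
{
  "mcpServers": {
    "db": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-postgres", "postgresql://localhost/appdb"]
    },
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"]
    }
  }
}
```

Commit this file and every clone of the repo gets the same tools.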

What this unlocks

  • “Look at our database schema and generate endpoints”
  • “Scan the repo and find all usages of X”
  • “Check deployment status and suggest a fix”
  • “Run tests, interpret failures, patch code”

Amateur approach

Here’s my schema (pastes partial schema). Make me an API.

Pro approach

Use our project MCP tools to inspect the actual schema, identify relationships, then generate a CRUD module with validation, error handling, and tests. After that, propose performance improvements based on indexes and query patterns you observe.

What changes: Claude Code stops guessing and starts integrating with your environment.

3) Stop using Git like that

Amateurs are still treating git like a sequence of memorized commands — they think it’s still 2022.

Pros treat git + PRs like an orchestrated workflow: branching, implementation, commit hygiene, PR description quality, reviewer routing, and cleanup—all expressed as intent.

Amateur behavior

  • One giant commit: “stuff”
  • Still using manual git commands: git commit, git branch, etc.
  • Vague PR description
  • No reviewer guidance

Pro command (workflow orchestration)

Create a new feature branch for adding “Sign in with Google” using OAuth2. Implement the full flow end-to-end (redirect handling, token exchange, session persistence, logout). Commit in logical chunks using our conventions (small, descriptive messages). Open a PR with a clear summary, testing notes, and security considerations, and request review from the security-focused reviewers.

Why this works: Claude Code shines when it can plan a multi-step process and keep the repo readable for humans.

4) Don’t be so naive

Amateurs build software like naive optimists — and it shows up both in their hand-written code and in their prompts to Claude Code.

Pros build systems that keep working when reality shows up: timeouts, duplicate requests, partial failures, bad inputs, rate limits, retries, and logging that makes incidents survivable.

Claude Code is unusually strong at “paranoid engineering”—you just have to ask for it.

Amateur prompt

Make a payment function.

Pro prompt (tests first + failure modes)

Design this payment flow defensively. Start by writing tests first (including failures): network timeouts, declined cards, malformed input, duplicate submission, idempotency, provider rate limiting, and partial capture scenarios. Then implement the code to satisfy the tests. Add structured logs, clear error taxonomy, and safe fallbacks where appropriate.
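
Duplicate submission and idempotency, two of the failure modes listed in that prompt, are easy to sketch. A minimal illustration (the names and the in-memory `Map` are mine; a real system would persist keys in a database or Redis):

```javascript
// Idempotency sketch: the same key never triggers a second charge.
// In production the store would be a database table or Redis, not a Map.
const processedCharges = new Map();

async function chargeOnce(idempotencyKey, chargeFn) {
  if (processedCharges.has(idempotencyKey)) {
    // Duplicate submission: return the original result, charge nothing.
    return processedCharges.get(idempotencyKey);
  }
  const result = await chargeFn();
  processedCharges.set(idempotencyKey, result);
  return result;
}

// Demo: double-submitting the same payment only charges once.
let chargeCount = 0;
const doCharge = async () => { chargeCount++; return { receipt: "r-001" }; };

chargeOnce("order-42", doCharge)
  .then(() => chargeOnce("order-42", doCharge))
  .then((r) => console.log(r.receipt, chargeCount)); // prints "r-001 1"
```

The point of asking for tests first is that cases like this get pinned down before the happy path is even written.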

If you want to push it even further:

Include a retry policy with jitter, a circuit-breaker-like safeguard, and metrics hooks so we can observe success/failure rates.

Outcome: instead of “works on my machine,” you get code that holds up under pressure.
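
A retry policy with jitter, as suggested in that last prompt, boils down to a few lines. A sketch under my own naming (this is the standard exponential-backoff-with-full-jitter pattern, not code from any particular library):

```javascript
// Retry with exponential backoff + "full jitter": each delay is a random
// duration between 0 and an exponentially growing cap.
async function retryWithJitter(fn, { maxAttempts = 5, baseMs = 100, capMs = 2000 } = {}) {
  let lastError;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      const delay = Math.random() * Math.min(capMs, baseMs * 2 ** attempt);
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
  throw lastError; // all attempts exhausted; surface the last failure
}

// Demo: a flaky call that fails twice, then succeeds.
let calls = 0;
const flaky = async () => {
  calls++;
  if (calls < 3) throw new Error("network timeout");
  return "ok";
};

retryWithJitter(flaky, { baseMs: 10 }).then((result) => console.log(result)); // prints "ok"
```

The randomness matters: without jitter, a fleet of clients retrying in lockstep can hammer a recovering service at the exact same moments.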

5) Stop refactoring like a noob

Amateurs refactor locally: rename a variable, extract a function, call it done.

Pros refactor system-wide: centralize logic, enforce boundaries, update imports everywhere, adjust tests, and keep behavior consistent across the codebase.

Claude becomes terrifyingly effective when you give it a refactor target + constraints + a migration plan.

Amateur prompt

Move this function into another file.

Pro prompt (multi-file, consistent patterns)

Refactor authentication so UI components no longer contain auth logic. Create a dedicated auth module/service, route all auth-related API calls through it, standardize error handling, and update all imports across the app. Add TypeScript types/interfaces where needed. Update tests to mock the new service cleanly. Then search the repo for any leftover auth logic in utilities and migrate it too.

Why this works: you’re not asking for “a refactor.” You’re asking for a controlled architectural change with guardrails.

The real secret: pros don’t write prompts, they write specs

If you want Claude to “go 10×,” stop giving it chores and start giving it:

  • intent (“what success looks like”)
  • constraints (security, performance, conventions, compatibility)
  • context (stack, repo patterns, architecture)
  • sequencing (“plan first, then implement, then test, then cleanup”)

Vercel’s new tool just made web dev way easier for coding agents

AI coding agents are about to get a lot more reliable for web automation & development — thanks to this new tool from Vercel.

These agents excel at code generation — but what happens when it's time to actually test the code in a real browser, the way a human (or Puppeteer) would?

They’ve always struggled to autonomously navigate the browser and to identify/manipulate elements quickly and reliably.

Flaky selectors. Bloated DOM code. Screenshots that can’t really be understood in the context of your prompts.

And this is exactly what the agent-browser tool from Vercel is here to fix.

It’s a tiny CLI on top of Playwright, but with one genuinely clever idea that makes browser control way more reliable for AI.

The killer feature: “snapshot + refs”

Instead of asking an agent to guess CSS selectors or XPath, agent-browser does this:

  1. It takes a snapshot of the page’s accessibility tree
  2. It assigns stable references like @e1, @e2, @e3 to elements
  3. Your agent clicks and types using those refs

So instead of the agent having to guess, on its own, which element you mean from a prompt like:

“Find the blue submit button and click it”

you get:

Shell
agent-browser snapshot -i
# - button "Sign up" [ref=e7]
agent-browser click @e7

No selector guessing or brittle DOM queries.

This one design choice makes browser automation way more deterministic for agents.

Why this is actually a big deal for AI agents

1. Way less flakiness

Traditional automation breaks all the time because selectors depend on DOM structure or class names.

Refs don’t care about layout shifts or renamed CSS classes.
They point to the exact element from the snapshot the agent just saw.

That alone eliminates a huge amount of “it worked yesterday” failures.

2. Much cleaner “page understanding” for the model

Instead of dumping a massive DOM or a raw screenshot into the model context, you give it a compact, structured snapshot:

  • headings
  • inputs
  • buttons
  • links
  • roles
  • labels
  • refs

That’s a way more usable mental model for an LLM.

The agent just picks refs and issues actions.
No token explosion or weird parsing hacks.

3. It’s built for fast agent loops

agent-browser runs as a CLI + background daemon.

The first command starts a browser.
Every command after that reuses it.

So your agent can do:

act → observe → act → observe → act → observe

…without paying a cold-start tax every time.

That matters a lot once you’re running 20–100 small browser steps per task.

Great power features

These are the things that make it feel agent-native — not just another wrapper around Playwright.

Skip login flows with origin-scoped headers

You can attach headers to a specific domain:

Shell
agent-browser open api.example.com \
  --headers '{"Authorization":"Bearer TOKEN"}'

So your agent is already authenticated when the page loads.

Even better: those headers don’t leak to other sites.
So you can safely jump between domains in one session.

This is perfect for:

  • dashboards
  • admin panels
  • internal tools
  • staging environments

Live “watch the agent browse” mode

You can stream what the browser is doing over WebSocket.

So you can literally watch your agent click around a real website in real time.

It’s incredibly useful for:

  • debugging weird agent behavior
  • demos
  • sanity-checking what the model thinks it’s doing

Where it shines the most

agent-browser is especially good for:

  • self-testing agents
    (“build the app → open it → click around → see if it broke → fix → repeat”)
  • onboarding and signup flows
  • dashboard sanity checks
  • form automation
  • E2E smoke tests driven by LLMs

It feels like it was designed for the exact “agentic dev loop” everyone’s building right now.

Claude Code finally fixed its biggest flaw — this is huge

Every single developer using Claude Code is about to get way more powerful & productive than they already are.

This new Claude Code update finally fixes a major issue that’s been negatively impacting its accuracy for several months now — and many of us were never even aware.

All this time, Claude Code has been bloating up your context in the background with unnecessary data from every single one of your MCP servers.

It didn’t matter whether you actually used them in any given prompt — if you had 100 MCP servers, it would dump the complex tool definitions and metadata for all of them into your context, with no exceptions.

Drowning out context that actually matters and lowering the accuracy.

But now, with the new Tool Search feature in Claude Code, this problem is finally gone for good.

And they fixed it with a trick borrowed straight from web development.

The old MCP experience was quietly broken

Here’s what was happening before:

  • You connect a few MCP servers
  • Each server exposes a bunch of tools
  • Claude loads all of them at startup
  • Your context window gets eaten alive
  • Tool selection gets worse as your tool list grows

So even before Claude starts thinking about your actual code, it’s already wasting tokens on tool schemas you may never use in that session.

The more “power user” you became, the worse things got.

That’s backwards.

Tool Search changes everything — with a neat trick from web dev

With Tool Search enabled, Claude Code stops doing dumb work up front.

Instead of loading everything, it does this:

  • Nothing is loaded at startup
  • Claude keeps MCP tools out of context by default
  • When a task comes up, Claude searches for relevant tools
  • Only the tools it actually needs get pulled in
  • Everything else stays out of the way

Same MCP. Same tools.
But with lazy loading, massively better behavior.
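
The difference is essentially eager vs. lazy loading. A toy illustration of the pattern (this is not Claude Code's actual implementation; the names and the matching logic are invented):

```javascript
// Eager (old): every full tool schema lives in context from startup.
// Lazy (new): only a lightweight index is resident; schemas load on demand.
const toolIndex = [
  { name: "query_database", summary: "run read-only sql queries" },
  { name: "check_deploys", summary: "inspect deployment status" },
];

// The heavy part: full parameter schemas, loaded only when needed.
const fullSchemas = {
  query_database: { params: { sql: "string", timeoutMs: "number" } },
  check_deploys: { params: { environment: "string" } },
};

// Search the cheap index first; pull in full definitions only on a match.
function loadToolsFor(task) {
  const words = task.toLowerCase().split(/\s+/);
  return toolIndex
    .filter((t) => words.some((w) => t.name.includes(w) || t.summary.includes(w)))
    .map((t) => ({ name: t.name, schema: fullSchemas[t.name] }));
}

console.log(loadToolsFor("check deployment status").map((t) => t.name));
```

Only the matched tool's schema ever enters context; everything else stays a one-line index entry.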

This is exactly how modern AI tooling should work.

Why this is so huge

1. You instantly get more usable context

This is the obvious win — and it matters a lot.

Tool schemas can be massive. When you’re running multiple MCP servers, you’re talking thousands (sometimes tens of thousands) of tokens wasted on definitions alone.

Lazy loading gives that space back to:

  • real code
  • repo context
  • actual reasoning

That alone makes Claude Code feel noticeably smarter.

2. Tool selection gets better, not worse

Too many tools hurt accuracy in another crucial way:

When a model sees a huge wall of tools, it’s harder for it to consistently pick the right one. Lazy loading narrows the decision space.

Claude now:

  • searches for tools relevant to this task
  • loads a small, focused set
  • chooses more reliably

That’s not theoretical — it’s how Anthropic designed Tool Search to scale.

3. MCP finally scales the way you always wanted

Before this update, connecting more MCP servers felt risky:

“Am I about to blow up my context just by having this enabled?”

But now you can keep everything connected.

With lazy loading, unused MCP servers are basically free. They don’t cost context until Claude actually needs them.

That changes how you think about building and composing MCP ecosystems.

It turns on automatically (which is perfect)

Claude Code enables Tool Search automatically once your MCP tool definitions would take more than 10% of the context window.

That’s smart:

  • small setups stay simple
  • big setups get optimized
  • no babysitting required

Very important: This changes how MCP servers should be written

Because Claude now searches for tools instead of seeing them all at once, your MCP server descriptions actually matter.

Good servers:

  • clearly state what problems they solve
  • make it obvious when Claude should use them
  • have clean, intentional tool naming

Bad descriptions = your tools don’t get discovered.

Lazy loading turns MCP servers into discoverable “capabilities” instead of background noise.

Google just made AI coding agents more powerful than ever

This is going to have such a massive positive impact on the accuracy and reliability of AI agents in software development.

The new Skills feature in the Google Antigravity IDE finally solves the problem of AI agents giving us wildly unpredictable/inaccurate results for the same prompt.

Too little context is terrible for agent accuracy — but things can get even worse when your agent has access to TOO MUCH context for a particular task.

The truth is your coding agent has access to a boatload of input/context that will not be necessary for any given task — but still takes part in the agent’s thinking process.

Every single file and folder from every segment of your codebase… all the frontend, all the backend, all the tests, scripts, utilities, style guides…

Even all the MCP servers you have connected will also be part of the context…

So what do you think is gonna happen when you give instructions like, “Fix the password reset bug in the API”?

Your agent is going to take every single piece of context it has into consideration when deciding how best to respond to you.

You were only expecting it to change 2 files in the backend, but it went ahead and changed 27 files all over the place (“Oh, this vibe coding thing is such a scam, I knew it”)

Because you gave it full responsibility for figuring out exactly what you were thinking, and the precise locations where you wanted changes made. Essentially, reading your mind — when all you gave it was a painfully vague instruction.

And while it can do that a decent amount of the time, other times it fails miserably. “Miserably,” at least as far as what you were expecting is concerned.

And this is exactly what this new Skills feature from Google is trying to solve.

Skills let you finally give structure to the agent — you can now specify a high-level series of tasks the agent should perform in response to certain kinds of prompts.

Instead of using all the context and input all the time, the agent processes only the context relevant to the task at hand.

It can still intelligently decide how to make changes to your codebase — but only within the framework and constraints you’ve provided with Skills.

And this is the major breakthrough.

What a Skill actually is

A Skill is just a small folder that defines how a certain kind of task should be done.

At the center of that folder is a file called SKILL.md. Around it, you can optionally include:

  • scripts the agent can run,
  • templates it should follow,
  • reference docs it can consult,
  • static assets it might need.

You can scope Skills:

  • per project (rules for this repo only),
  • or globally (rules that follow you everywhere).

That means you can encode “how we do things here” once, instead of re-explaining it every time.

The key idea: Skills load only when needed

This is the part that actually makes things more reliable.

Antigravity doesn’t shove every Skill into the model’s context up front. Instead, it keeps a lightweight index of what Skills exist, and only loads the full instructions when your request matches.

So if you ask to:

  • commit code → commit rules load
  • fix a bug → bug-fix workflow loads
  • change a schema → safety rules load

Everything else stays out of the way.

Less noise. Less confusion. Fewer “creative interpretations” where you didn’t want any.

What goes inside SKILL.md

A Skill has two layers:

1) The trigger

At the top is a short description that says when this Skill should be used.
This is what Antigravity matches against your request.

2) The playbook

The rest is pure instruction:

  • step-by-step workflows
  • constraints (“don’t touch unrelated files”)
  • formats (“output a PR summary like this”)
  • safety rules

When the Skill activates, this playbook is injected into context and followed explicitly.
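
Concretely, a minimal SKILL.md might look like the sketch below. The trigger lives in the frontmatter description; the playbook is the body. (The exact frontmatter schema is defined by the tool, so treat these fields as illustrative.)

```markdown
---
name: bug-fix-workflow
description: Use when the user asks to fix a bug, a failing test, or a regression.
---

# Bug-fix workflow

1. Reproduce the bug with a failing test before changing any code.
2. Make the smallest fix that turns the test green.
3. Run the full test suite; report any new failures instead of silently "fixing" them.
4. Do not touch files unrelated to the bug.
5. Summarize root cause and fix in two sentences for the PR description.
```

Notice how step 4 directly encodes the constraint from earlier: no more 27-file surprises.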

Another powerful example: commit messages that stop being garbage

Imagine a Skill whose entire job is to handle commits.

Instead of:

“Commit these changes (and please follow our style)”

You encode:

  • allowed commit types
  • subject length limits
  • required “why” explanations
  • forbidden vague messages

Now whenever you say:

“Commit this”

The agent doesn’t improvise.
It follows the rules.

Same input.
Same standards.
Every time.

That’s reliability.

3 ways Skills improve reliability

1. They turn tribal knowledge into enforcement

Instead of hoping the agent remembers how your team works, you encode it.

2. They can delegate to real scripts

For things that shouldn’t rely on judgment — tests, validation, formatting — a Skill can call actual scripts and report results. That’s deterministic behavior, not vibes.

3. They narrow the decision space

A tightly scoped Skill reduces guesswork. The agent is less likely to invent a workflow when you’ve already defined one.

This new MCP server from Google just changed everything for app developers

Wow this new MCP server from Google is going to change a whole lot for app developers.

Your apps are about to become something your users actually care to use.

You’ll finally be able to effortlessly understand your users without having to waste time hopelessly going through mountains of Analytics data.

Once you set up the new official Google Analytics MCP server, you’ll be able to ask the AI intuitive, human-friendly questions:

  • “Which acquisition channel brings users who actually retain?”
  • “Did onboarding improve after the last release? Show me conversion by platform”

And it’ll answer using the massive amount of data sitting inside your analytics.

No more surfing through event tables and wasting time trying to interpret what numbers mean for your product. You just ask the AI exactly what you want to know.

Analytics becomes a seamless part of your workflow.

Don’t ignore this.

This is the first-class, Google-supported MCP (Model Context Protocol) server for Google Analytics.

MCP is now the standard way for an AI tool (like Gemini) to connect to external systems through a set of structured “tools.”

Instead of the model guessing from vibes, the AI can call real functions like “list my GA properties” or “run a report for the last 28 days,” get actual results back, and then reason on top of those results.

So think of the Google Analytics MCP server as a bridge:

  • Your AI agent on one side
  • Your GA4 data on the other side
  • A clean tool interface in the middle

What can it do?

Under the hood, it uses the Google Analytics APIs (Admin for account/property info, Data API for reporting). In practical terms, it gives your AI the ability to:

  • list the accounts and GA4 properties you have access to
  • fetch details about a specific property
  • check things like Google Ads links (where relevant)
  • run normal GA4 reports (dimensions, metrics, date ranges, filters)
  • run realtime reports
  • read your custom dimensions and custom metrics, so it understands your schema

Also important: it’s read-only. It’s built for pulling data and analyzing it, not for changing your Analytics configuration.

A game changer

A big reason many people don’t use analytics deeply isn’t because they don’t care.

It’s because it’s slow, complex and annoying.

You open GA → you click around → you find a chart → it doesn’t answer the real question → you add a dimension → now it’s messy → you export → you still need to interpret it in the context of your app.

With MCP, you can move closer to the way you actually think:

  • “Did onboarding improve after the last release? Show me conversion by platform.”
  • “What events tend to happen right before users churn?”
  • “Which acquisition channel brings users who actually retain?”
  • “What changed this week, and what’s the most likely cause?”

That’s what makes this feel different. It’s not “analytics in chat” as a gimmick — it’s analytics as a fast feedback loop.

High-level setup

The official path is basically:

  1. enable the relevant Analytics APIs in a Google Cloud project
  2. authenticate using Google’s recommended credentials flow with read-only access
  3. add the server to your Gemini MCP config so your agent can discover and call the tools

After that, your agent can list properties, run reports, and answer questions grounded in your real GA4 data.

This isn’t just a nicer interface for analytics—it’s a fundamental shift in how you build products people actually want to use. When your data becomes something you can ask instead of hunt, you make better decisions faster, and your app becomes something users genuinely love spending time in.

A real difference maker.

10 incredible AI tools for software development

10 incredible AI tools to completely transform your software development.

Design, coding, terminal work, testing, deployment… every part of your workflow moves faster than ever.

Idea in — Product out.

1. UI Design: Figma Make

This is where the speed really starts.

Instead of staring at a blank frame and dragging rectangles for 45 minutes, you just describe what you want.

Dashboard, landing page, onboarding flow—boom, it’s there. And the best part: It’s still normal Figma. You can tweak spacing, colors, components, whatever.

No locked-in AI mockups. Just fast first drafts that you refine and move on from.

2. IDE + Agent: Windsurf (multi-agent mode)

Windsurf makes you feel like you actually have teammates.

The built-in agent (Cascade) understands your project, makes multi-file changes, and doesn’t freak out the moment something gets complicated. Then you turn on multi-agent mode and suddenly:

  • One agent is handling frontend components
  • Another is building out your backend and Firestore models
  • A third is wiring up auth, payments, edge cases

You’re not typing every line anymore. You’re reviewing, nudging, and making high-level decisions.

Pro tip: drop an AGENTS.md file into your repo that explains how you like things done. Folder conventions, error handling patterns, naming rules. After that, the agents stop guessing and start working the way you actually build.
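
A sketch of what such a file might contain (every path and convention below is a made-up example; the point is that these are your rules, written down once):

```markdown
# AGENTS.md

- Components live in src/components/, one component per file, PascalCase names.
- All Firestore access goes through src/lib/db.ts; never query from components.
- Errors: throw typed AppError instances; no silent catch blocks.
- Tests sit next to the file under test as *.test.tsx and must pass before commit.
- Commit messages follow type(scope): subject, with the subject under 50 characters.
```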

3. Terminal Intelligence: Gemini CLI

This is your “do stuff for me” layer.

Gemini CLI lives in your terminal and acts like a real assistant instead of just a chat window. You can ask it to:

  • Scan your repo and suggest a refactor
  • Generate tests and run them
  • Fix broken builds
  • Migrate files or rewrite APIs

Basically: all the annoying glue work that usually eats your time? Hand it off.

4. Payments: Stripe

Still undefeated.

Stripe just works. Clean APIs, great docs, predictable patterns. Subscriptions, one-time payments, webhooks, retries—it’s all there and battle-tested.

In this stack, your agent handles most of the setup, you review the flow, and suddenly your app can take money without you building a payment system from scratch.

Which, let’s be honest, you never wanted to do anyway.

5. Database: Firestore

Firestore is perfect for shipping fast.

Document-based, flexible, and easy to evolve as your product changes. You don’t need to design the “perfect schema” on day one. You model what you need now, and refactor later when the shape of your product is clearer.

Great for user profiles, app state, feature data, and basically anything that isn’t hardcore analytics or financial reporting.

6. Error Handling: Sentry

If you’re moving fast, things will break. The question is whether you see it immediately or hear about it from a user three days later.

Sentry gives you:

  • Stack traces
  • Performance issues
  • Context about what users were doing when things blew up

So when your agents ship something that’s 99% right, you catch the 1% before it becomes a nightmare.

7. Hosting: Fly.io

Fly is “just deploy” hosting.

You push, it builds, it runs. No endless config rituals. No wrestling with infrastructure when you just want your app live.

This pairs insanely well with an AI-driven workflow: small changes, frequent deploys, fast feedback, easy rollbacks.

Ship, break, fix, repeat.

8. Analytics: Google Analytics

You can’t improve what you don’t measure.

GA4 gives you event tracking, funnels, and user behavior without much setup. Once it’s wired in, your features stop being “I think this is useful” and start being “users actually clicked this 3× more than the old version.”

Your agents build features. Analytics tells you whether they were worth building.

9. Security: Auth0

Auth is one of those things you either do right… or regret forever.

Auth0 handles login, tokens, permissions, OAuth, all the boring but critical stuff. You get solid security without reinventing authentication.

Which means you can focus on product logic instead of spending two weeks on password resets and token refresh bugs.

10. Testing & Vetting: TestSprite

This is your final safety net.

After your agents implement features, TestSprite helps generate and run tests, especially end-to-end flows. “Can a user sign up, pay, and reach the dashboard?” That kind of real-world check.

Combine it with Sentry:

  • Tests catch what breaks immediately
  • Sentry catches what still slips through

That’s how you move fast without lighting production on fire.

This new Rust IDE is an absolute game changer

Woah this is huge.

This new Zed IDE is absolutely incredible — the speed and flexibility is crazy.

Built with RUST — do you have any idea what that means?!

This is not yet another painfully boring VS Code fork… this is serious business.

As far as performance goes there is no way in hell VS Code is going to have any chance against this.

Even the way the AI works is quite different from what we’re used to:

Look at how we easily added the precise context we needed — but just wait and see what we do next after this…

Now we send messages to the AI — but this is not like your regular Copilot or Cursor stuff…

We are not even making changes directly, we are creating context…

Context that we can then use with the inline AI — like telling it to apply specific sections of what the AI said as changes:

Blazing fast:

This is an open-source IDE where performance, multiplayer, and agent workflows are first-class infrastructure.

Available everywhere you are.

Lightning-fast agentic support from the ground up

We already know about agentic coding — but what Zed gets right is where those agents live.

Instead of bouncing between your editor, a terminal, and three side tools, Zed gives you a native Agent Panel inside the editor. Agents can reason over your codebase, propose multi-file changes, and walk you through decisions in the same place you’re already editing.

Even better: Zed isn’t trying to lock you into one model or one workflow. It’s built to plug into the agent tools you already use. If you’re running terminal agents, Zed can talk to them directly. If you’re building toolchains around MCP servers, Zed already speaks that language. The editor becomes the hub where humans and agents actually collaborate instead of taking turns.

This is what makes it different. Not “AI in the editor,” but the editor designed around AI as a teammate.

Speed is the architecture, not the slogan

Zed was built in Rust with a hard focus on performance. It uses modern rendering and multi-core processing so the UI stays fluid even on big projects. No input lag. No “why did my editor freeze” moments. It feels lightweight in a way most editors stopped feeling years ago.

And when Zed came to Windows, they didn’t ship a half-baked port. They implemented a real rendering backend, deep WSL integration, and made Windows a first-class platform. That’s not checkbox compatibility—that’s engineering discipline.

If you care about flow state, this matters. The editor disappears. You move faster. You think more clearly. You ship more.

Serious business

Zed isn’t a weekend hacker tool. It’s backed by serious funding and built by people who’ve already shaped the modern editor ecosystem. That matters because it means long-term velocity: features land quickly, architectural bets actually get executed, and the product has a direction.

And that direction is clear: real-time collaboration between developers and agents inside the editor itself.

Not “connect to an AI.”
Not “paste code into a chat.”
But a shared workspace where humans and machines build together.

That’s the future Zed is aiming at—and it’s already usable today.

No-brainer pricing

Zed’s pricing is designed to get you in fast. The editor is free. The Pro plan is cheap, includes unlimited accepted AI edits, and gives you token credits for agent usage. There’s also a trial.

Translation: you don’t need a procurement meeting or a long debate. You can just install it and try your real workflow.

Which is exactly what they want you to do.

Definitely worth switching to

Zed is what happens when someone rebuilds the editor around modern realities instead of layering them on top of 2015-era assumptions.

You get:

  • A UI that feels instant.
  • Multiplayer and collaboration that are native.
  • Agents that live inside your workflow instead of beside it.
  • An architecture that respects performance and scale.
  • An open ecosystem that doesn’t lock you into one model, one vendor, or one style of “AI coding.”

If you’re already doing serious agent-driven development, Zed doesn’t ask you to change how you think—it finally gives you an environment that matches how you already work.

How to use Gemini CLI and blast ahead of 99% of developers

You’re missing out big time if you’re still ignoring this incredible tool.

There’s so much it can do for you — but many devs aren’t using it anywhere close to its fullest potential.

If you’ve ever wished your terminal could think with you — plan, code, search, even interact with GitHub — that’s exactly what Gemini CLI does.

It’s Google’s command-line tool that brings Gemini right into your shell.

You type, it acts. You ask, it plans. And it works with all your favorite tools — it’s even powered by the same tech behind the incredible Gemini Code Assist:

It’s ChatGPT for your command line — but with more power under the hood.

A massive selling point has been the MCP servers — acting as overpowered plugins for Gemini CLI.

Hook it up to GitHub, a database, or your own API, and suddenly you’re talking to your tools in plain English. Want to open an issue, query a database, or run a script? Just ask.

How to get started fast

Just:

Shell
npm install -g @google/gemini-cli
gemini

You’ll be asked to sign in with your Google account the first time. Pick a theme, authenticate:

And you’re in:

Talking to Gemini CLI

There are two ways to use it:

  • Interactive mode — just run gemini and chat away like you’re in a terminal-native chat app.
  • Non-interactive mode — pass your prompt with a flag, like gemini -p "Write a Python script to…". Perfect for scripts or quick tasks.

Either way, Gemini CLI can do more than just text. It can:

  • Read and write files in your current directory.
  • Search the web.
  • Run shell commands (with your permission).

The secret sauce

Here’s where it gets exciting. MCP (Model Context Protocol) servers are like power-ups. Add one for GitHub and you can:

  • Clone a repo.
  • Create or comment on issues.
  • Push changes.

Add one for your database or your docs, and you can query data, summarize PDFs, or pull in reference material without leaving the CLI.

All you do is configure the server in your settings.json file. Gemini CLI then discovers the tools and lets you use them in natural language.
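For instance, a GitHub MCP server entry in settings.json could look like the sketch below — the server name, package, and token variable are illustrative, so check the docs of the MCP server you’re actually using for the exact command:

```json
{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": {
        "GITHUB_PERSONAL_ACCESS_TOKEN": "your-token-here"
      }
    }
  }
}
```

On the next launch, Gemini CLI starts the server, discovers its tools, and makes them available to your prompts.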

Give Gemini a memory with GEMINI.md

Create a GEMINI.md in your project and drop in your project’s “personality.” It can be as simple as:

Always respond in Markdown.
Plan before coding.
Use React and Tailwind for UI.
Use Yarn for npm package installs.

Next time you run Gemini CLI, it will follow those rules automatically. You can check what memory it’s using with /memory show.

Slash commands = Instant prompts

If you do the same thing a lot — like planning features or explaining code — you can create a custom slash command.

Make a small TOML file in .gemini/commands/ — the file name becomes the command name, so plan.toml gives you /plan:

description = "Generate a plan for a new feature"
prompt = "Create a step-by-step plan for {{args}}"

Then in Gemini CLI just type:

/plan user authentication system

And boom — instant output.

Real-world examples

Here’s how people actually use Gemini CLI:

  • Code with context — ask it to plan, generate, or explain your codebase.
  • Automate file ops — have it sort your downloads, summarize PDFs, or extract data.
  • Work with GitHub — open issues, review PRs, push updates via natural language.
  • Query your data — connect a database MCP server and ask questions like a human.

Safety first

Gemini CLI can run shell commands and write files, but it always asks first. You can allow once, always, or deny. It’s like having a careful assistant who double-checks before doing anything risky.

Gemini CLI isn’t just another AI interface. It’s a workbench where you blend AI with your existing workflows. Instead of hopping between browser tabs, APIs, and terminals, you get one cohesive space where you talk and it acts.

Once you add MCP servers, GEMINI.md context, and slash commands, it starts to feel less like a tool and more like a teammate who lives in your terminal.

The myth of the AI coding bubble

Hundreds of millions of dollars.

Every single month.

And not from desperate investors or idle hobbyists — from real developers.

From serious companies that mean business.

AI coding is an industry inching closer to the $10 billion mark with each passing year.

There are still devs on Reddit who think they’re too good for AI — but the biggest tech giants now have AI writing major portions of their codebases — and they are not going back.

Google now reports that AI writes a substantial share of its new code.

We have 20 million GitHub Copilot users.

GitHub Copilot Enterprise customers increased 75% quarter over quarter as companies tailor Copilot to their own codebases.

And 90% of the Fortune 100 now use GitHub Copilot.

Microsoft Fiscal Year 2025 Fourth Quarter Earnings Conference Call

In this article, we’re going to go through why this wave of AI tooling is fundamentally different—starting with the one thing bubbles never have: real money.

The money is real (and it’s massive)

In a real bubble, companies burn billions with no path to profit. Yet AI coding tools are already printing money. Microsoft’s latest reports show that GitHub’s Annual Recurring Revenue (ARR) has crossed the $2 billion mark. GitHub Copilot alone accounts for over 40% of GitHub’s total revenue growth.

This isn’t a “pilot program” or a free beta; this is a product that millions of developers and thousands of companies are paying for because it delivers immediate, measurable value.

In fact, Satya Nadella recently noted that Copilot is already a larger business than the entirety of GitHub was when Microsoft acquired it in 2018.

“Just a toy”

The “it’s just a toy” argument dies when you look at who is actually using these tools. This isn’t just for hobbyists or “vibe coders” building weekend projects. According to Microsoft’s 2025 earnings data, over 90% of the Fortune 100 are now using GitHub Copilot.

When companies like Goldman Sachs, Ford, and P&G integrate a tool into their core engineering workflow, they aren’t chasing a trend—they’re chasing efficiency. They’ve done the math. If an engineer costing $200k a year becomes even 20% more productive, the $20-per-month subscription isn’t an expense; it’s the highest ROI investment the company has ever made.
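The back-of-the-envelope ROI math above is easy to check — the numbers here are the article’s illustrative figures, not real pricing data:

```python
# Illustrative ROI sketch: a $200k/year engineer who becomes
# 20% more productive vs. a ~$20/month Copilot-style seat.
salary = 200_000            # fully-loaded annual cost (hypothetical)
productivity_gain = 0.20    # 20% more output
seat_cost = 20 * 12         # $20/month subscription, per year

extra_value = salary * productivity_gain   # value of the extra output
roi = extra_value / seat_cost              # return per dollar spent

print(f"Extra value per year: ${extra_value:,.0f}")   # $40,000
print(f"Annual seat cost:     ${seat_cost:,.0f}")     # $240
print(f"ROI multiple:         {roi:.0f}x")            # ~167x
```

Even if the real productivity gain is a quarter of that, the subscription still pays for itself many times over — which is why procurement says yes.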

StackOverflow

If you want to see the “bubble” argument fall apart, look at the casualties of this revolution. We are witnessing the Stack Overflow collapse. For a decade, the standard workflow was: Encounter bug → Google error → Find Stack Overflow thread → Copy/Paste.

That era is over. Recent data shows that Stack Overflow traffic has plummeted, with the rate of new questions dropping by a factor of 10. Why? Because developers no longer need to wait for a human to answer their question in three hours when an AI can solve it in three seconds. This shift in developer behavior is permanent. You don’t “un-learn” that level of speed.

The speed of human thought

The most profound reason this isn’t a bubble is philosophical but practical: AI increases the speed of human thought actualizing itself in software. Historically, the bottleneck of software was the “syntax tax.” You had a great idea, but you had to spend hours wrestling with boilerplate, configuration, and documentation. AI removes that friction. It allows a developer to stay in “the flow,” moving from concept to execution at the speed of thought.

We aren’t just writing code faster; we are thinking bigger. When the “cost” of trying a new feature or refactoring a messy codebase drops to near zero, innovation explodes.

The dot-com bubble burst because the internet wasn’t ready for the promises being made. In 2026, the AI coding revolution is different: the infrastructure is here, the revenue is proven, and the productivity gains are undeniable.

This isn’t a bubble. It’s the end of the “typing” era of software engineering and the beginning of the “architecting” era. If you’re waiting for the pop, you’re just going to get left behind.

10 must-have AI coding tools for software developers in 2026

Don’t tell me all you know how to use is Copilot or Cursor.

Oh wait, you’re even telling me that generating code is all you use AI for?

Honestly I’m shaking my head at how much potential you’ve been wasting as a software developer.

Don’t you know there’s so much more to AI coding than just sending prompts to an agent and waiting for code to drop from the sky?

1) v0

Do you see what I’m talking about?

There’s even AI for the stage before you even think of writing a single line of code.

Use v0 to turn all your amazing ideas into working designs with remarkable efficiency, especially on the frontend.

Why it matters

  • Generates complete, usable UI and app scaffolding
  • Can push code directly into real projects, not just prototypes
  • Strong alignment with modern frontend patterns and component libraries

Best for

  • Rapid UI development, internal tools, dashboards, and early product versions

2) Qodo

Qodo focuses on improving code quality rather than just generating more code.

Why it matters

  • Acts as an AI-powered reviewer across IDEs and pull requests
  • Encourages consistent standards and better engineering discipline
  • Scales review quality across teams and repositories

Best for

  • Teams that want fewer regressions and stronger code governance

3) Google Stitch

Stitch sits at the intersection of design and development, transforming ideas and visuals into usable UI and code.

Why it matters

  • Converts text prompts and images into UI layouts and frontend code
  • Bridges design and engineering workflows smoothly
  • Speeds up iteration between concepts and implementation

Best for

  • Frontend developers working closely with designers
  • Teams exploring multiple UI directions quickly

4) Multi-agent mode + Windsurf

Already a new era of AI coding.

Thanks to recent upgrades, IDEs like Windsurf now let you have multiple coding agents working on your codebase — at the same time.

You can add several features at once — and also fix bugs while you’re at it.

Your very own army of developers working together to build something incredible.

Why it matters

  • Handles multi-file and repo-wide changes naturally
  • Supports multiple agents working in parallel on the same codebase
  • Integrates planning, execution, and review inside the editor

Best for

  • Large refactors, new features, complex debugging, and coordinated development tasks

5) Google Antigravity (with artifacts)

By far the most standout feature of Google Antigravity.

Artifacts — a new way for coding agents to communicate the process they used to make changes for you.

Screenshots, recordings, step-by-step checklists… artifacts let you know exactly what happened in the most intuitive way possible.

For example, look at the video Antigravity created when testing the web app I told it to create:

Antigravity focuses on agent orchestration and accountability rather than just code generation.

Why it matters

  • Lets you dispatch multiple agents for long-running or complex tasks
  • Produces artifacts like plans, diffs, logs, and walkthroughs for review
  • Emphasizes transparency and safety in autonomous workflows

Best for

  • Complex coding, multi-step fixes, and tasks that require traceability

6) Claude Code

Claude Code brings agentic coding directly into the terminal, fitting naturally into existing developer habits.

Why it matters

  • Works directly inside real repositories
  • Handles planning, implementation, and explanation in one flow
  • Ideal for developers who live in the CLI

Best for

  • Terminal-first workflows, scripting, and repo-wide reasoning

7) Gemini CLI

Gemini CLI is a terminal-based AI agent designed for structured problem solving and tool use.

Why it matters

  • Can reason through tasks step by step
  • Interacts with files, shell commands, and external tools
  • Extensible through custom integrations

Best for

  • Automating repetitive tasks
  • Exploring unfamiliar codebases quickly

8) Testim

Testim uses AI to make automated testing faster to create and easier to maintain.

Why it matters

  • Generates tests from high-level descriptions
  • Reduces flaky tests and maintenance overhead
  • Adapts better to UI changes than traditional test frameworks

Best for

  • Frontend-heavy applications
  • Teams struggling with brittle end-to-end tests

9) Snyk AI

Snyk AI brings security directly into the AI-driven development loop.

Why it matters

  • Automatically suggests fixes for vulnerabilities
  • Fits naturally into pull request and CI workflows
  • Helps teams keep up with security as development speeds increase

Best for

  • Organizations shipping quickly without compromising security

10) Mintlify

In 2026, documentation is part of the product. Mintlify makes it easier to keep docs current, readable, and useful.

Why it matters

  • Designed for modern developer documentation workflows
  • Supports fast authoring and clean presentation
  • Makes docs more usable for both humans and AI tools

Best for

  • API documentation, platform docs, and internal knowledge bases

AI isn’t here to type faster—it’s here to expand how you think, design, collaborate, review, ship, and own the entire lifecycle of what you build.

The real leverage comes when you let AI shape ideas, decisions, quality, speed, and trust—before, during, and after the code ever exists.