Tari Ibaba

Tari Ibaba is a software developer with years of experience building websites and apps. He has written extensively on a wide range of programming topics and has created dozens of apps and open-source libraries.

Claude 4.5 comes with a revolutionary new tool that everyone missed

Claude Sonnet 4.5 totally stole the show, so nobody is paying attention to the incredible tool that shipped alongside it.

This underrated tool could actually end up completely transforming the app ecosystem forever.

Meet Imagine with Claude — a revolutionary new tool for building software — apps that are ALIVE.

Apps that write their own code — are you with me??

First of all, describe whatever you want and Claude builds it on the fly; there is no underlying static codebase anywhere.

And from then on — the software generates itself.

There is no compilation or building of pre-written code; the app generates more of itself in real time as you interact with it.

Instantly turn your wild ideas into working prototypes and beyond. Tweak in any way you want and shape the end result directly.

The key distinction here is that nothing is prewritten or predetermined.

When you click a button or enter text in the environment, Claude interprets your action and generates the necessary software components to adapt and respond instantly.

It’s software that evolves based on your in-the-moment needs, which is a significant departure from static, pre-packaged applications.

This is the kind of magic that's now possible thanks to the incredible new Claude Sonnet 4.5 model.

This new version of Claude is tuned for long, multi-step reasoning and tool use.

It literally coded non-stop for 30+ hours, which is just absolutely wild.

It’s blowing every competitor out of the water when it comes to Computer Use — autonomously performing tasks with your PC.

“Imagine with Claude” is a showcase for those abilities.

It’s a short-term experiment but its implications are huge for devs and designers and everyone else.

It points to a future of disposable, adaptive software.

Imagine needing a very specific, one-off tool for a task—instead of hunting for a pre-made solution or coding it yourself, the AI could instantly assemble a custom application that functions exactly how you need it to, right when you need it.

It essentially collapses the gap between idea and working prototype to mere seconds.

Product designers could use it to create complex, interactive user interfaces instantly, allowing for faster feedback and iteration.

Users could generate specialized software to manage personal data, analyze complex information, or automate niche tasks without ever touching a line of code.

It’ll be really exciting to see how much of an impact this has.

Claude Sonnet 4.5 is an absolute game changer

Wow this is HUGE.

Anthropic just shocked the world with their incredible new Claude Sonnet 4.5 model and people are going crazy.

You absolutely cannot ignore this.

They are loudly calling it the best coding model in the world and so many devs who are trying it out completely agree.

Can you believe this? 👇

30+ freaking hours of pure autonomous coding — nowhere on earth has this ever been seen before. Unbelievably unprecedented.

Even the Claude team themselves were shocked beyond belief — “resetting our expectations”😏…

I mean just see the sheer difference between Claude Sonnet 3.7 vs 4.0 vs 4.5 for yourself:

Claude 3.7:

Claude 4.0

And now the beast — 4.5:

Oh yes — Sonnet 4.5 is built from the ground up for real, sustained coding and agent workflows: the kind of long, messy jobs that used to be too complex for AI to handle without constant prompt babysitting.

When Claude 4 came out, it wowed us with 90+ minutes of uninterrupted coding. Now what do we even say about this?

It’s just such a huge huge leap.

You can just assign it a massive batch of features to implement and run off without a care in the world. It will do everything.

When you come back and see the amazing results you will be both awed and scared about the future of your job.

Look, this model literally cranked out an 11,000-line app, complete with planning, implementation, and debugging, without being spoon-fed every step (or any step):

11,000 lines!

Oh and then we still have these high-and-mighty individuals smugly looking down on anything to do with AI coding.

With 4.5, Claude has gotten even better at automating tasks on your PC — Computer Use.

Approximately 200% better, something I find pretty hard to dispute after seeing this incredible Chrome usage demo. It's just too good:

Manage files, fill out forms, handle spreadsheets and email… the possibilities are endless. The reliability is gold.

For day-to-day coding, Sonnet 4.5 is the difference between asking an intern for edits and having a brilliant teammate who ships real features.

Instead of “change this one file”, you can now hand it a full GitHub issue — or a dozen — covering bug fixes, test expansion, and documentation polish, and expect a proper pull request at the end.

It’s also showing stronger planning and comprehension, which matters when you’re touching dozens of files. If you’ve ever dreaded the chaos of updating SDKs across services or refactoring auth logic everywhere, you can see why this matters.

Edits are becoming more reliable too.

Early testers note fewer brittle changes and more coherent patches across entire repos. That’s exactly what makes it feel safe to delegate bigger jobs.

You don’t have to wait long to get your hands on it:

  • GitHub Copilot is already rolling out Claude 4.5 Sonnet to Pro, Business, and Enterprise plans.
  • AWS Bedrock offers it as the newest Anthropic option for coding and agent-heavy use cases.
  • Third-party tools like Augment Code have already made it their default model for collaborative development.

Whenever you try it, you will most certainly feel the effects of the massive upgrades.

Claude 4.5 Sonnet is a real turning point. We’ve really gone from autocomplete helpers to agents that can stick with a project for days and actually deliver working software.

This is surely going to make a mark.

GitHub’s new Copilot coding agent is absolutely incredible

GitHub finally released their Copilot coding agent to the world and it’s been completely insane.

This is in a completely different league from agentic tools like Windsurf or Cursor.

Forget autocomplete and forget vibe coding — this is a full-blown genius teammate.

This is an AI companion you can delegate real development work to — and it comes back with a pull request for you to review.

It’s actually collaborating with you like a real human.

No need for endless prompting — just assign it massive tasks, like an entire GitHub issue.

And that's that — you don't have to guide or micromanage it in any way.

The agent:

  • Spins up an isolated GitHub Actions environment
  • Clones your repo
  • Builds and tests your code
  • Opens a draft pull request with its changes.

Make comments on the PR and it will instantly make any needed changes.

It’s built to handle real tasks — not just making edits here and there:

Fixing major bugs, implementing features, improving test coverage, updating documentation, and so much more.

But the biggest selling point here is the asynchronous delegation.

You’re no longer chained to your IDE while an AI tool generates code. You can:

  • Offload routine work and keep coding on something else.
  • Get a PR-first workflow that matches how your team already ships software.
  • Run tasks in a clean CI-like environment, avoiding “works on my machine” issues.

Regular coding agents are amazing — but they live inside your editor. You’re chatting with them right there in your workspace.

They watch what you’re doing, keep track of your Problems panel, your edits, your clipboard — and they act instantly on your files. It’s like having a very attentive pair programmer who’s always sitting next to you.

But this Copilot agent doesn’t sit inside your IDE at all.

You hand it a task and it disappears into the cloud, does the work, and comes back later with all the results.

Instead of direct file edits you get a packaged, ready-to-review PR.

  • Copilot Coding Agent is best for: Bug fixes with clear repro steps, test coverage boosts, doc updates, dependency bumps, or any feature slice you want to run in the background and review later.
  • IDE Agents: Rapid prototyping, design-heavy changes, multi-file refactors, or anything where you want immediate feedback and full control.

Real examples:

  • Refactor an API call across dozens of files — it branches, updates, tests, and PRs.
  • Add a new endpoint with proper routing and unit tests.
  • Migrate a dependency with code updates across the repo.

The new Copilot coding agent makes async, repo-level development feel seamless.

If Windsurf and Cursor are about collaborating with AI inside your IDE, Copilot’s agent is about giving your AI its own seat at the table — one that files branches and PRs just like a real developer.

It’s an entirely new way to build software — and it’s here now.

These 5 MCP servers reduce AI code errors by 99% (perfect context)

AI coding assistants are amazing and powerful—until they start lying.

Like it just gets really frustrating when they hallucinate APIs or forget your project structure and break more than they fix.

And why does this happen?

Context.

They just don’t have enough context.

Context is everything for AI assistants. That’s why MCP is so important.

These MCP servers fix that. They ground your AI in the truth of your codebase—your files, libraries, memory, and decisions—so it stops guessing and starts delivering.

These five will change everything.

Context7 MCP Server

Context7 revolutionizes how AI models interact with library documentation—eliminating outdated references, hallucinated APIs, and unnecessary guesswork.

It sources up-to-date, version-specific docs and examples directly from upstream repositories — to ensure every answer reflects the exact environment you’re coding in.

Whether you’re building with React, managing rapidly evolving dependencies, or onboarding a new library, Context7 keeps your AI grounded in reality—not legacy docs.

It seamlessly integrates with tools like Cursor, VS Code, Claude, and Windsurf, and supports both manual and automatic invocation. With just a line in your prompt or an MCP rule, Context7 starts delivering live documentation, targeted to your exact project context.
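For most clients that means one small JSON entry. A minimal sketch of the config, assuming the server is published as @upstash/context7-mcp (check the project's README for your client's exact format):

JSON
{
  "mcpServers": {
    "context7": {
      "command": "npx",
      "args": ["-y", "@upstash/context7-mcp"]
    }
  }
}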

Key features

  • On-the-fly documentation: Fetches exact docs and usage examples based on your installed library versions—no hallucinated syntax.
  • Seamless invocation: Auto-invokes via MCP client config or simple prompt cues like “use context7”.
  • Live from source: Pulls real-time content straight from upstream repositories and published docs.
  • Customizable resolution: Offers tools like resolve-library-id and get-library-docs to fine-tune lookups.
  • Wide compatibility: Works out-of-the-box with most major MCP clients across dozens of programming languages.

Errors it prevents

  • Calling deprecated or removed APIs
  • Using mismatched or outdated function signatures
  • Writing syntax that no longer applies to your version
  • Missing new required parameters or arguments
  • Failing to import updated module paths or packages

Powerful use cases

  • Projects built on fast-evolving frameworks like React, Angular, Next.js, etc.
  • Onboarding to unfamiliar libraries without constant tab switching
  • Working on teams where multiple versions of a library may be in use
  • Auditing legacy codebases for outdated API usage
  • Auto-generating code or tests with correct syntax and parameters for specific versions

Get Context7 MCP Server: LINK

Memory Bank MCP Server

The Memory Bank MCP server gives your AI assistant persistent memory across coding sessions and projects.

Instead of repeating the same explanations, code patterns, or architectural decisions, your AI retains context from past work—saving time and improving coherence. It’s built to work across multiple projects with strict isolation, type safety, and remote access, making it ideal for both solo and collaborative development.
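Wiring it up looks like any other MCP server. This is a hedged sketch only: the package name and the memory root variable below are placeholders, so take the real values from the project's README:

JSON
{
  "mcpServers": {
    "memory-bank": {
      "command": "npx",
      "args": ["-y", "<memory-bank-mcp-package>"],
      "env": {
        "MEMORY_BANK_ROOT": "/path/to/memory-bank"
      }
    }
  }
}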

Key features

  • Centralized memory service for multiple projects
  • Persistent storage across sessions and application restarts
  • Secure path traversal prevention and structure enforcement
  • Remote access via MCP clients like Claude, Cursor, and more
  • Type-safe read, write, and update operations
  • Project-specific memory isolation

Errors it prevents

  • Duplicate or redundant function creation
  • Inconsistent naming and architectural patterns
  • Repeated explanations of project structure or goals
  • Lost decisions, assumptions, and design constraints between sessions
  • Memory loss when restarting the AI or development environment

Powerful use cases

  • Long-term development of large or complex codebases
  • Teams working together on shared projects needing consistent context
  • Developers aiming to preserve and reuse design rationale across sessions
  • Projects with strict architecture or coding standards
  • Solo developers who want continuity and reduced friction when resuming work

Get Memory Bank MCP Server: LINK

Sequential Thinking MCP Server

Definitely one of the most important MCP servers out there.

It’s designed to guide AI models through complex problem-solving processes — it enables structured and stepwise reasoning that evolves as new insights emerge.

Instead of jumping to conclusions or producing linear output, this server helps models think in layers—making it ideal for open-ended planning, design, or analysis where the path forward isn’t immediately obvious.
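Under the hood, the server exposes a single tool that the model calls over and over, passing its current thought plus bookkeeping fields. An illustrative sketch of one call's arguments, based on the reference implementation's field names (treat the exact schema as an assumption):

JSON
{
  "thought": "The bug only appears with duplicate scores, so dedupe before sorting",
  "thoughtNumber": 3,
  "totalThoughts": 5,
  "nextThoughtNeeded": true
}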

Key features

  • Step-by-step thought sequences: Breaks down complex problems into numbered “thoughts,” enabling logical progression.
  • Reflective thinking and branching: Allows the model to revise earlier steps, fork into alternative reasoning paths, or return to prior stages.
  • Dynamic scope control: Adjusts the total number of reasoning steps as the model gains more understanding.
  • Clear structure and traceability: Maintains a full record of the reasoning chain, including revisions, branches, and summaries.
  • Hypothesis testing: Facilitates the generation, exploration, and validation of multiple potential solutions.

Errors it prevents

  • Premature conclusions due to lack of iteration
  • Hallucinated or shallow reasoning in complex tasks
  • Linear, single-path thinking in areas requiring exploration
  • Loss of context or rationale behind decisions in multi-step outputs

Powerful use cases

  • Planning and project breakdowns
  • Software architecture and design decisions
  • Analyzing ambiguous or evolving problems
  • Creative brainstorming and research direction setting
  • Any situation where the model needs to explore multiple options or reflect on its own logic

Once you install it, it becomes a powerful extension of your model’s cognitive abilities—giving you not just answers, but the thinking behind them.

Get Sequential Thinking MCP Server: LINK

Filesystem MCP Server

The Filesystem MCP server provides your AI with direct, accurate access to your local project’s structure and contents.

Instead of relying on guesses or hallucinated paths, your agent can read, write, and navigate files with precision—just like a developer would. This makes code generation, refactoring, and debugging dramatically more reliable.

No more broken imports, duplicate files, or mislocated code. With the Filesystem MCP your AI understands your actual workspace before making suggestions.
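Setup is one config entry. A minimal sketch assuming the reference @modelcontextprotocol/server-filesystem package, where every path you list becomes a pre-approved root the server may touch:

JSON
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-filesystem",
        "/Users/you/projects/my-app"
      ]
    }
  }
}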

Key features

  • Read and write files programmatically
  • Create, list, and delete directories with precise control
  • Move and rename files or directories safely
  • Search files using pattern-matching queries
  • Retrieve file metadata and directory trees
  • Restrict all file access to pre-approved directories for security

Ideal scenarios

  • Managing project files during active development
  • Refactoring code across multiple directories
  • Searching for specific patterns or code smells at scale
  • Debugging with accurate file metadata
  • Maintaining structural consistency across large codebases

Get FileSystem MCP: LINK

GitMCP

AI assistants can hallucinate APIs, suggest outdated patterns, and sometimes overwrite code that was just written.

GitMCP solves this by making your AI assistant fully git-aware—enabling it to understand your repository’s history, branches, files, and contributor context in real time.

Whether you’re working solo or in a team, GitMCP acts as a live context bridge between your local development environment and your AI tools. Instead of generic guesses, your assistant makes informed suggestions based on the actual state of your repo.

GitMCP is available as a free, open-source MCP server, accessible via gitmcp.io/{owner}/{repo} or embedded directly into clients like Cursor, Claude Desktop, Windsurf, or any MCP-compatible tool. You can also self-host it for privacy or customization.
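For clients that support remote servers, the config can be as small as a URL. A sketch (the key name for remote URLs varies by client, so check yours):

JSON
{
  "mcpServers": {
    "gitmcp": {
      "url": "https://gitmcp.io/{owner}/{repo}"
    }
  }
}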

Key features

  • Full repository indexing with real-time context
  • Understands commit and branch history
  • Smart suggestions based on existing code and structure
  • Lightweight issue and contributor context integration
  • Live access to documentation and source via GitHub or GitHub Pages
  • No setup required for public repos—just add a URL and start coding

Errors it prevents

  • Code conflicts with recent commits
  • Suggestions that ignore your branching strategy
  • Overwriting teammates’ changes during collaboration
  • Breaking functionality due to missing context
  • AI confusion from outdated or hallucinated repo structure

Ideal scenarios

  • Collaborating in large teams with frequent commits
  • Working on feature branches that need context-specific suggestions
  • Reviewing and resolving code conflicts with full repo awareness
  • Structuring AI-driven workflows around GitHub issues
  • Performing large-scale refactors across multiple files and branches

Get GitMCP: LINK

Microsoft just made MCP even more insane

This will absolutely transform the MCP ecosystem forever.

Microsoft just released a new feature that makes creating MCP servers easier than ever before.

Now, with Logic Apps as MCP servers, you can easily extend any AI agent with extra data and context without writing a single line of code.

String together thousands of tools and give any LLM access to all the data flowing through them.

From databases and APIs to Slack, GitHub, Salesforce, you name it. Thousands of connectors are already there.

Now you can plug a whole new world of prebuilt integrations straight into your AI workflows.

Until now, hooking an LLM or agent up to a real-world workflow was painful. You had to write API clients, handle OAuth tokens, orchestrate multiple steps… it was a lot.

With Logic Apps as MCP servers, all that heavy lifting is already done. Your agent can call one MCP tool, and under the hood Logic Apps will ping APIs, transform data, or trigger notifications across your services.

You can wire up a Logic App that posts to social media, updates a database, or sends you alerts, and then call it from your AI app. No new server, no SDK headaches.

Microsoft’s MCP implementation even supports streaming HTTP and (with some setup) Server-Sent Events. That means your agent can get partial results in real time as Logic Apps run their workflows — great for progress updates or long-running tasks.

Because it’s running inside Azure, you get enterprise-grade authentication, networking, and monitoring. Even if you’re small now, this matters when you scale or if you’re dealing with sensitive data.

What can you do right now

  • Build a Logic App that starts with an HTTP trigger and ends with a Response action.
  • Flip on the MCP endpoint option in your Logic App’s settings.
  • Register your MCP server in Azure API Center so agents can discover it.
  • Point your AI agent to your new MCP endpoint and start calling it like any other tool.

Boom — your no-code workflow is now an AI-callable tool.

Some ideas to get you started

  • Personal Dashboard: Pull data from weather, GitHub, and your to-do list, and serve it to your AI bot in one call.
  • Social Blast: Draft tweets or LinkedIn posts with AI, then call a Logic App MCP server to publish them automatically.
  • File Pipeline: Resize images, upload to storage, and notify a channel — all triggered by a single MCP call.
  • Notifications & Alerts: Have your AI assistant call a Logic App to send you Slack, Teams, or SMS updates.

The bigger picture

This move is a major milestone because it connects two worlds:

  • The agent/tooling world (MCP, AI assistants, LLMs)
  • The workflow/integration world (Logic Apps, connectors, automations)

Until now these worlds were separate. Now they’re basically plug-and-play.

Microsoft is betting that MCP will be the standard for AI agents the way HTTP became the standard for the web.

By making Logic Apps MCP-native, they’re giving you a shortcut to a huge ecosystem of integrations and enterprise workflows.

How to use Gemini CLI and blast ahead of 99% of developers

You’re missing out big time if you’re still ignoring this incredible tool.

There’s so much it can do for you — but many devs aren’t using it anywhere close to its fullest potential.

If you’ve ever wished your terminal could think with you — plan, code, search, even interact with GitHub — that’s exactly what Gemini CLI does.

It’s Google’s command-line tool that brings Gemini right into your shell.

You type, it acts. You ask, it plans. And it works with all your favorite tools — it’s even powered by the same tech behind the incredible Gemini Code Assist:

It’s ChatGPT for your command line — but with more power under the hood.

A massive selling point has been the MCP servers — acting as overpowered plugins for Gemini CLI.

Hook it up to GitHub, a database, or your own API, and suddenly you’re talking to your tools in plain English. Want to open an issue, query a database, or run a script? Just ask.

How to get started fast

Just:

Shell
npm install -g @google/gemini-cli
gemini

You’ll be asked to sign in with your Google account the first time. Pick a theme, authenticate:

And you’re in:

Talking to Gemini CLI

There are two ways to use it:

  • Interactive mode — just run gemini and chat away like you’re in a terminal-native chat app.
  • Non-interactive mode — pass your prompt as a flag, like gemini -p "Write a Python script to…". Perfect for scripts or quick tasks.

Either way, Gemini CLI can do more than just text. It can:

  • Read and write files in your current directory.
  • Search the web.
  • Run shell commands (with your permission).

The secret sauce

Here’s where it gets exciting. MCP (Model Context Protocol) servers are like power-ups. Add one for GitHub and you can:

  • Clone a repo.
  • Create or comment on issues.
  • Push changes.

Add one for your database or your docs, and you can query data, summarize PDFs, or pull in reference material without leaving the CLI.

All you do is configure the server in your settings.json file. Gemini CLI then discovers the tools and lets you use them in natural language.
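For example, a GitHub entry in settings.json might look like this. A sketch assuming the commonly used @modelcontextprotocol/server-github package; the token placeholder is yours to fill in:

JSON
{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": {
        "GITHUB_PERSONAL_ACCESS_TOKEN": "<your-token>"
      }
    }
  }
}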

Give Gemini a memory with GEMINI.md

Create a GEMINI.md in your project and drop in your project’s “personality.” It can be as simple as:

Always respond in Markdown.
Plan before coding.
Use React and Tailwind for UI.
Use Yarn for npm package installs.

Next time you run Gemini CLI, it will follow those rules automatically. You can check what memory it’s using with /memory show.

Slash commands = Instant prompts

If you do the same thing a lot — like planning features or explaining code — you can create a custom slash command.

Make a small TOML file in .gemini/commands/ (the filename becomes the command name, e.g. plan.toml) like this:

description = "Generate a plan for a new feature"
prompt = "Create a step-by-step plan for {{args}}"

Then in Gemini CLI just type:

/plan user authentication system

And boom — instant output.

Real-world examples

Here’s how people actually use Gemini CLI:

  • Code with context — ask it to plan, generate, or explain your codebase.
  • Automate file ops — have it sort your downloads, summarize PDFs, or extract data.
  • Work with GitHub — open issues, review PRs, push updates via natural language.
  • Query your data — connect a database MCP server and ask questions like a human.

Safety first

Gemini CLI can run shell commands and write files, but it always asks first. You can allow once, always, or deny. It’s like having a careful assistant who double-checks before doing anything risky.

Gemini CLI isn’t just another AI interface. It’s a workbench where you blend AI with your existing workflows. Instead of hopping between browser tabs, APIs, and terminals, you get one cohesive space where you talk and it acts.

Once you add MCP servers, GEMINI.md context, and slash commands, it starts to feel less like a tool and more like a teammate who lives in your terminal.

This secret new coding model just shocked the entire world

The crazy thing is nobody knows exactly who’s behind it.

But it’s absolutely huge.

A brand new coding model built for the way devs actually work today — agents, terminals, long sessions, and even images.

Open Cline or Windsurf or Cursor and you will see the new option hiding in your model picker:

code-supernova.

I previously thought it was a new native Windsurf model but no — this is a secret partner working with all the major IDEs.

Think of code-supernova as a tireless senior engineer who never forgets context. Its context window clocks in around 200,000 tokens—large enough to swallow your repo, your logs, your test output, your onboarding docs, and still have room for much more.

No more prompt-chopping. Your agent can hold the entire picture in its head and keep pushing forward.

And it’s multimodal. Feed it screenshots of a broken UI, an architecture diagram, a flowchart you sketched on paper, or a whiteboard photo from last night’s brainstorm.

It turns visual signals into code moves.

That unlocks a new workflow: design → diagram → implementation, without the translation overhead. Visual debugging becomes real. “Here’s the stack trace and the screenshot—fix it” becomes a single instruction, not a meeting.

Where code-supernova truly flexes is the agentic work.

This model is built for long-running sessions where your IDE agent plans, edits, runs tools, reads terminal output, evaluates results, and loops until the task is done. A true coding partner.

It reasons across files, updates the right modules in the right order, and keeps the terminal state in mind as it iterates. Refactors that used to take an afternoon shrink to a coffee break.

Cross-cutting changes stop feeling risky because the agent isn’t flying blind—it remembers every decision it just made.

The best part is it’s free to try right now in Cline, Kilo Code, Cursor, and Windsurf.

No new account. No platform migration. Just pick “code-supernova,” hand it a mission and watch your IDE light up.

It’s an alpha from a stealth lab, which makes this the perfect moment to build unfair advantage: learn its strengths, wire it into your flow, and ship faster while everyone else is still reading about it.

Concerned about sending your data to people you don’t know?

Flip through your IDE’s data-sharing settings and tune them how you like.

code-supernova is a real force multiplier: it’s built for how we really code—context-heavy, agent-driven, multimodal, and terminal-native.

Your model sees the whole board, thinks multiple moves ahead, and executes without losing the thread.

If you live inside an IDE agent and run long sessions, this is the model to beat.

Fire it up while it’s still free, point it at the messy, sprawling, real-world work on your plate, and let it run.

This coding puzzle is incredible

This is such a tricky coding puzzle, you won’t believe the algorithm I had to make to solve it.

JavaScript
function addToLeaderboard(ranked, players) {
  // Can you write the code for the addToLeaderboard() function?
}

console.log(addToLeaderboard([80, 60, 60, 10], [90])); // [1]
console.log(addToLeaderboard([80, 60, 60, 10], [90, 100])); // [2, 1]
console.log(
  addToLeaderboard([80, 60, 60, 10], [90, 60, 5]) // [1, 3, 5]
);

So first let’s understand what the puzzle is about.

You have a function that takes two inputs:

  • a ranked array — the scores already on the leaderboard
  • a players array — the new player scores to add

So each score in ranked already has a rank.

For example:

  • Player scores: 80, 60, 10
  • Resulting ranking: 1, 2, 3 — respectively

Ignoring how bad you have to be to get 10 in any sort of game where others are scoring 80…

What if the players are tied? For example:

  • 80, 60, 60, 10

Resulting ranking:

  • 1, 2, 2, 3

You give the same rank to the tied players, and then the next players get the rank after that.

So what does the function do? It adds the new batch of scores in players to the leaderboard in ranked, which ranks them.

The function will return an array of the new ranks of these new scores that just got added.

For example, if the players array is [90] (just one item), it will return [1]; the leaderboard scores are now 90, 80, 60, 10.

So if the players array is [90, 60, 5], what will it return?

It will be [1, 3, 5] — NOT [1, 2, 4] or [1, 2, 5] like you might mistakenly guess.

So this is where we are:

JavaScript
function addToLeaderboard(ranked, players) {
}

console.log(addToLeaderboard([80, 60, 60, 10], [90])); // [1]
console.log(addToLeaderboard([80, 60, 60, 10], [90, 100])); // [2, 1]
console.log(
  addToLeaderboard([80, 60, 60, 10], [90, 60, 5]) // [1, 3, 5]
);

So how do we go about it? How do we get the ranks of the newly added scores?

I can see that data from both arrays is being combined to give the updated leaderboard data.

You can also see that this is a leaderboard of scores — which clearly means sorting is going on…

Can you see where this is going?

Initially, I thought it was going to be a highly sophisticated algorithm.

But after this simple thought exercise, what we need to do is so obvious.

  1. Merge the players and ranked arrays.
  2. Sort the merged array.
  3. Get the position of each score in players from the sorted array.

Let’s merge the arrays:

JavaScript
// ranked: 80, 60, 10
// players: 90, 60, 5
const merged = [...ranked, ...players];
// merged: 80, 60, 10, 90, 60, 5

Now sorting — since the highest score comes first, we need to sort in descending order:

JavaScript
const sorted = merged.sort((a, b) => b - a);
// sorted: 90, 80, 60, 60, 10, 5

How about getting the positions of each score from players in the new leaderboard?

Of course we have the indexOf() method.

But if you just used this you’d be way off the mark… (get it?)

indexOf returns the zero-based index — first element has index of 0, second of 1, and so on…

JavaScript
const rank = sorted.indexOf(players[0]); // 0

To get the leaderboard ranks, we need the one-based index — first element should give 1.

So we can easily fix that with a +1.

JavaScript
const rank = sorted.indexOf(players[0]) + 1; // 1

So what’s left now is doing this for each item in the players array — a perfect use case for… ?

map():

JavaScript
const ranks = players.map((score) => sorted.indexOf(score) + 1);

So now let’s test the full function:

JavaScript
function addToLeaderboard(ranked, players) {
  const merged = [...ranked, ...players];
  const sorted = merged.sort((a, b) => b - a);
  const ranks = players.map((score) => sorted.indexOf(score) + 1);
  return ranks;
}

console.log(addToLeaderboard([80, 60, 60, 10], [90])); // [1]
console.log(addToLeaderboard([80, 60, 60, 10], [90, 100])); // [2, 1]
console.log(
  addToLeaderboard([80, 60, 60, 10], [90, 60, 5]) // [1, 3, 5]
);

If it works we should get the results in the comments:

No! What happened with the last test case?

It’s giving [1, 3, 7] instead of [1, 3, 5]?

It seems like the last score (5) is being pushed down by 2, from rank 5 to rank 7.

What do you think could be causing this?

Yes! The multiple 60s in the array are affecting the result that indexOf gives.

To fix this we need to remove the duplicates after the merge.

A perfect job for a Set:

JavaScript
function addToLeaderboard(ranked, players) {
  const merged = [...ranked, ...players];
  // Remove duplicates so tied scores share a single rank
  const unique = [...new Set(merged)];
  const sorted = unique.sort((a, b) => b - a);
  const ranks = players.map((score) => sorted.indexOf(score) + 1);
  return ranks;
}

Now let’s try that again:

Perfect.

The absolute best AI coding extensions for VS Code in 2025

AI tools inside VS Code have gone way way beyond simple autocompletion.

Today you can chat with an assistant, get multi-file edits, generate tests, and even run commands straight from natural language prompts.

These are the very best AI coding extensions that will transform how you develop software forever.

1. Gemini Code Assist

Google’s Gemini Code Assist brings the Gemini model into VS Code. It stands out for its documentation awareness and Google ecosystem ties.

Why it’s great:

  • Answers come with citations so you can see which docs were referenced.
  • It can do code reviews, generate unit tests, and help debug.
  • Works across app code and infrastructure (think Terraform, gcloud CLI, etc.).

Great for: Anyone working heavily in Google Cloud, Firebase, or Android, or who values transparent, sourced answers.

2. GitHub Copilot

GitHub Copilot is the “classic” AI coding assistant — but it’s evolved far beyond just inline suggestions. With the main Copilot extension and Copilot Chat, you get a fully integrated agent inside VS Code.

Just see how easy it is:

Why it’s great:

  • Agent and Edit modes let Copilot actually implement tasks across your files and iterate until the code works.
  • Next Edit Suggestions predict your next likely change and can propose it automatically.
  • Workspace-aware chat lets you ask questions about your codebase, apply edits inline, or run slash commands for refactoring.

Great for: Developers who want deep VS Code integration and a polished, “just works” AI experience.

3. Tabnine

Tabnine is all about privacy, control, and customization. It offers a fast AI coding experience without sending your proprietary code to third parties.

Look how we use it to rapidly create tests for our code:

Effortless code replacement:

Why it’s great:

  • Privacy first: can run self-hosted or in your VPC, and it doesn’t train on your code.
  • Custom models: enterprises can train Tabnine on their own codebases.
  • Versatile assistant: generates, explains, and refactors code and tests across many languages.

Great for: Teams with strict data policies or anyone who wants an AI coding assistant they can fully control.

4. Amazon Q for VS Code

Amazon Q is AWS’s take on an “agentic” coding assistant — it can read files, generate diffs, and run commands all from natural language prompts.

Why it’s great:

  • Multi-step agent mode writes code, docs, and tests while updating you on its progress.
  • MCP support means you can plug in extra tools and context to extend what it can do.
  • Inline chat and suggestions feel native to VS Code.

Great for: AWS developers who want more than autocomplete — a true assistant that can execute tasks in your environment.

5. Windsurf Plugin (Codeium)

Some of you may not know that Windsurf was actually Codeium, originally just a nice VS Code extension, before becoming a full-fledged beast of an IDE.

After the Windsurf IDE came out, they renamed the extension to the Windsurf Plugin.

The Windsurf Plugin delivers lightning-fast completions and chat inside VS Code, plus a generous free tier.

Why it’s great:

  • Unlimited free single- and multi-line completions right out of the box.
  • Integrated chat for refactoring, explaining, or translating code.
  • Links with the standalone Windsurf Editor for even more features.

Great for: Anyone who wants a fast, no-hassle AI coding experience.

6. Blackbox AI

Blackbox AI is one of the most popular AI coding agents in the Marketplace, designed to keep you in flow while it helps with code, docs, and debugging.

Why it’s great:

  • Agent-style workflow: run commands, select files, switch models, and even connect to MCP servers for extra tools.
  • Real-time code assistance: completions, documentation lookups, and debugging suggestions that feel native to VS Code.
  • Understands your project: conversations and edits can reference the broader codebase, not just a single file.
  • Quick start: install and start using it without a complicated setup.

Great for: Developers who want a free, quick-to-try AI agent inside VS Code that can go beyond autocomplete and interact with their workspace.

How to choose?

Best by area:

  • Deep integration & agents: GitHub Copilot, Amazon Q, or Blackbox AI.
  • Doc-aware answers with citations: Gemini Code Assist.
  • Strict privacy and custom models: Tabnine.
  • Fast and free: Windsurf Plugin.

Also consider your stack and your priorities.

  • On AWS? Amazon Q makes sense.
  • All-in on Google Cloud? Gemini is your friend.
  • Need privacy? Tabnine is your best bet.
  • Want the smoothest VS Code integration? Copilot.
  • Want to try AI coding with no cost barrier? Windsurf Plugin.

If you’re not sure where to start, pick one, try it for a real project, and see how it fits your workflow. The best AI coding tool is the one that actually helps you ship code faster — without getting in your way.

Learn More at Live! 360 Tech Con

Interested in building secure, high-quality code without slowing down your workflow? At Live! 360 Tech Con, November 16–21, 2025, in Orlando, FL, you’ll gain practical strategies for modern development across six co-located conferences. From software architecture and DevOps to AI, cloud, and security, you’ll find sessions designed to help you write better, safer code.

Special Offer: Save $500 off standard pricing with code CODING.

This AI dev tool from Vercel is monstrously good

I made a huge mistake ignoring this unbelievable tool for so long.

Vercel’s v0 is completely insane… This is a UI generator on crazy crazy steroids.

Imagine creating a fully functioning web app from nothing but vague ideas, mockups, and screenshots.

Not even a single atom of code has to be written down anywhere.

Traditional UI generators stop at turning designs into dead, boring code snippets.

v0 acts like an agent: it plans steps, fetches data, inspects pages, fixes missing dependencies or runtime errors, and can even hook into GitHub and deploy straight to Vercel.

You see everything it’s doing — you can pause or tweak anything at any time.

One of the best selling points is how easily anyone can share designs with anyone else.

There are a ridiculous number of freely available templates from the community that anyone can use and modify.

It’s like GitHub for web apps and UI designs.

Look how I just loaded this project from the community:

Then I asked it to make the theme a light theme — so damn effortless…

And that’s really it — I can publish immediately, and it builds just like any other Vercel project:

The result: an actual live site we can work with:

It works across popular stacks — React, Vue, Svelte, or plain HTML+CSS — and gives you three powerful views:

  • Live preview to see your app instantly
  • Code view for full control
  • Design Mode for visual tweaks without touching code

By default, v0 uses shadcn/ui with Tailwind CSS, but you can also plug in your own design system to keep everything on-brand.

Using v0 is simple and fast:

  1. Describe your app in text or upload screenshots/Figma files.
  2. Iterate visually with Design Mode, adjusting typography, spacing, or colors without spending credits.
  3. Connect real services like databases, APIs, or AI providers.
  4. Deploy with a single click to Vercel — add your own domain if you like.

Because GitHub is built in, you can link a repo, choose a branch, and let v0 sync changes both ways.

What you can build with it

v0 is great for:

  • Turning mockups into production-ready UIs
  • Spinning up full-stack apps with authentication and a database
  • Creating dashboards, landing pages, and internal tools
  • Adding AI features by plugging in your own OpenAI, Groq, or other API keys

Essentially, it’s a fast lane for designers, product managers, and developers who want to get to a real, working app without months of hand-coding.

Integrations that matter

v0 connects to popular back-end and AI tools out of the box:

  • Databases like Neon, Supabase, Upstash, and Vercel Blob
  • AI providers including OpenAI, Groq, fal, and more
  • UI components via shadcn’s “Open in v0” buttons

For teams building their own workflows, Vercel also offers a v0 Platform API to programmatically tap into its text-to-app engine.
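Calling it looks like any chat-style HTTP API. A purely illustrative JavaScript sketch: the endpoint path and body fields below are assumptions, not the documented contract, so check Vercel's v0 Platform API docs before relying on them:

JavaScript
// Hypothetical sketch: the endpoint and payload shape are assumptions,
// not the documented v0 Platform API contract.
const res = await fetch("https://api.v0.dev/v1/chats", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${process.env.V0_API_KEY}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    message: "A pricing page with three tiers, dark theme",
  }),
});
const chat = await res.json(); // metadata for the generated chat/app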

Pricing and recent changes

In 2025, Vercel shifted v0 to a credit-based pricing model with monthly credits on Free, Premium, and Team plans. Purchased credits on paid plans last a year.

It also moved from v0.dev to v0.app to signal its new focus on being an agentic app builder — one that can research, reason, debug, and plan, not just generate code.

Security and reality check

Because it’s powerful and fast, v0 has also been misused by bad actors to clone phishing sites. That’s not Vercel’s intention — it’s a reminder to always review and test generated code, just as you would a junior developer’s pull request.

When v0 shines

v0 is ideal if you:

  • Need to ship MVPs, landing pages, or internal tools quickly
  • Already work with Tailwind/shadcn or have a design system in place
  • Want to iterate fast with an AI assistant that can fix its own errors

You’ll still want to review for security, performance, and business logic. But for speed, flexibility, and polish, it’s one of the best ways to get an app live today.

Getting started

Sign up at v0.app, describe your project, iterate visually, hook up your back end, and deploy. In minutes, you can go from idea to a working app.

Vercel’s v0 isn’t just another “AI website builder.” It’s a full-stack, agentic assistant that understands modern web development and helps you actually ship.

This is definitely worth trying if you’re looking for a fast and flexible way to go from concept to production.