Tari Ibaba is a software developer with years of experience building websites and apps. He has written extensively on a wide range of programming topics and has created dozens of apps and open-source libraries.
I can’t believe VS Code still doesn’t have this incredible AI feature in GitHub Copilot.
It’s so clear that they’re just not the most innovative IDE in town anymore.
The powerful Tab-to-jump has already become a standard in major rivals like Windsurf and Cursor — yet VS Code is still largely stuck in line completions.
Windsurf and Cursor have had bona fide coding agents for weeks now, with several updates — yet GitHub’s agent hasn’t even gone past public preview lol.
Line completions are just so old school now… why are they moving so slowly?
With tab-to-jump the IDE no longer thinks merely at the level of the current line but the entire file and codebase.
It’s no longer just trying to predict our next character, but now our next action in the code.
It’s becoming more and more of a full-fledged autonomous AI companion.
Things like refactoring just keep getting easier — like recently I added an extra param to a method, and I instantly started getting Tab-to-jump suggestions that took me to all the method calls to add the new argument using the most logical variable.
It automatically analyzed the entire code base to understand that this is what I’m most likely to do.
And after I jumped, it immediately suggested regular inline completions for me to accept.
Things like this just move much faster now, I’m really feeling it.
If you’re still pretending like these AI tools are not going to drastically speed up your dev process, you’re just deceiving yourself.
Mhmm, and I wonder what prompt they could have given the model to make this tab-to-jump stuff work.
I know for inline code completions it’s pretty straightforward — something like this.
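Maybe along these lines (a purely speculative sketch of mine, not the actual Copilot prompt; the function name and shape are invented):

```javascript
// Purely speculative — NOT the actual Copilot prompt, just my guess at the shape
function buildCompletionPrompt(textBeforeCursor, textAfterCursor) {
  return [
    'You are a code completion engine.',
    'Given the code before and after the cursor, output ONLY the text',
    'to insert at the cursor. No explanations, no formatting.',
    '',
    `Code before cursor:\n${textBeforeCursor}`,
    '',
    `Code after cursor:\n${textAfterCursor}`,
  ].join('\n');
}
```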
But for integrating something like tab-to-jump it’ll definitely be more complex.
The model would probably have to output two different modes of result — like “completion” mode and “jump” mode.
So maybe something like this 👇
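Something like this, perhaps (a completely made-up output format; the mode names and fields are my own invention):

```javascript
// Completely speculative output format — the modes and fields are my invention.
// Say this is the file, right after adding the new param `c` to sum:
//   1 | function sum(a, b, c) {
//   2 | return a + b;
//   3 | }

// "completion" mode: text to insert at the current cursor position,
// here updating the body to include the new param
const completionResult = { mode: 'completion', text: ' + c' };

// "jump" mode: move the cursor first — to line 2, column 13, just after the `b`
const jumpResult = { mode: 'jump', line: 2, column: 13 };
```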
Here I added a new param to the sum function, so the next logical thing is to update the body to include it in result.
And here I get a jump result to the 2nd line and 13th column to make the edit just after the b:
On second thought I might not even need separate modes — I could just use completion but always give a line and column, even if it’s where the cursor is:
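A sketch of that single-mode idea (again, invented field names; the editor object here is a stand-in, not a real API):

```javascript
// Single-mode sketch — invented fields; `editor` is a stand-in, not a real API.
// Every result carries a position; for an in-place completion it's just
// the current cursor position, so a "jump" needs no special handling.
function applyResult(editor, result) {
  editor.moveCursor(result.line, result.column); // no-op if already there
  editor.insertText(result.text);
}

// Minimal fake editor just to demo the flow
function fakeEditor() {
  const ops = [];
  return {
    ops,
    moveCursor: (line, column) => ops.push({ move: [line, column] }),
    insertText: (text) => ops.push({ insert: text }),
  };
}

const editor = fakeEditor();
applyResult(editor, { line: 2, column: 13, text: ', c' });
console.log(editor.ops); // the move, then the insert
```

Both completions and jumps then go through the exact same code path.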
In any case it’s a highly handy feature that every IDE needs to implement right now to be taken seriously in life.
The bar for the AI features expected of IDEs will only continue to rise. Back then, code completions were novel and amazing — now you must be joking if you don’t even have that.
VS Code and GitHub Copilot need to buckle up and stop letting GitHub forks put them to shame.
The new Windsurf IDE just got a massive new upgrade to finally silence the AI doubters.
The Wave 5 upgrade is here and it’s packed with several game changing AI features to make coding easier and faster.
So ever since Windsurf first came out, they’ve been releasing massive upgrades in something they call Waves.
Like in Wave 3 they brought this intelligent new tab-to-jump feature that I’ve really been enjoying so far.
Just like you get inline code suggestions that you accept with Tab, now you get suggestions to jump to the place in the file where you’re most likely to continue making changes.
So like now it’s not just predicting your next line — it’s predicting your overall intent in the file and codebase at large. The context and ability grew way beyond just the code surrounding your cursor.
And things have been no different with Wave 4 and now 5 — huge huge context upgrades…
With Wave 5 Windsurf now uses your clipboard as context for code completions.
Just look at this:
It’s incredible.
We literally copied pseudocode and once we went back to start typing in the editor — it saw the Python file and function we were creating — it started making suggestions to implement the function.
And it would intelligently do the exact same thing for any other language.
It’ll be really handy when copying answers from ChatGPT or Stack Overflow that you can’t just paste verbatim into your codebase.
Instead of having to paste and format/edit, Windsurf automatically integrates what you copied into your codebase.
But Wave 5 goes way beyond basic copy and paste.
With this new update Windsurf now uses your conversation history with Cascade chat as context for all your code completions.
Look at this.
Cascade didn’t even give us any code snippets — but the Tab feature still understood the chat message and realized what we were most likely to do next in the codebase.
So you see, we’re getting even more ways to edit our code from chatting with Cascade — depending on the use case.
You can make Windsurf edit the code directly with Cascade Write mode — auto-pilot vibe coding.
You can chat with Cascade and get snippets of changes that can be made — which you can accept one at a time, based on what you think is best.
Or now, you can use Cascade to get guidelines on what you need to do, then write the code yourself for fine-grained control — using insanely sophisticated Tab completions along the way to move faster than ever.
I know some of you still don’t want to trust the auto-edits from vibe coding tools — this update is certainly for you.
Whichever level of control you want, Windsurf can handle it.
And it still doesn’t stop there with Wave 5 — your terminal isn’t getting left behind…
Context from the IDE terminal includes all your past commands:
These AI IDEs just keep getting better and better.
The gap between devs who use these tools and those acting like they’re irrelevant just keeps growing wider every day.
Now, even more updates to help you spend less time on grunt coding and develop with lightning speed — while still having precise control of everything when you choose to.
Now with Wave 5 — a smarter coding companion with even more context to better predict thoughts and save you from lifting much of a finger — apart from the one to press Tab.
It’s all about counting the “boomerangs” in a list. Can you do it?
Maybe you first want to know what the hell a boomerang is, right?
Okay so it’s like, any section of the list with 3 numbers where the first and last numbers repeat, like [1, 2, 1]:
So how many boomerangs can you see in [4, 9, 4, 6, 3, 8, 3, 7, 5, -5, 5]?
…
It’s 3
[4, 9, 4]
[3, 8, 3]
[5, -5, 5]
So the puzzle is to write an algorithm to find this pattern throughout the list.
But here’s where it gets tricky: the algorithm should also count overlapping boomerangs — like in [1, 5, 1, 5, 1, 5, 1] we have FIVE boomerangs — not two — I even thought it was three at first — no, it’s freaking five: [1, 5, 1], [5, 1, 5], [1, 5, 1], [5, 1, 5], and [1, 5, 1], one for each of the five consecutive 3-number windows.
So how do we go about this?
My first instinct is to loop through the list, and then when we get up to 3 items we can do the calculation.
It’s one of those situations where we need to keep track of previous values at every step of the loop.
So in every loop we’ll have one value for the current item, the previous item, and the one before that.
How do we keep track of all the items?
For the current item it’s super easy of course — it’s just the current iter variable:
countBoomerangs([1, 2, 1, 0, 3, 4, 3]);

function countBoomerangs(arr) {
  let curr;
  for (const item of arr) {
    curr = item;
    console.log(`curr: ${curr}`);
  }
}
What about keeping track of the previous variable?
We can’t use the current iter variable anymore — we need to store the iter variable from one iteration just before the next one starts.
So what we do — store the value of curr just before the loop ends:
countBoomerangs([1, 2, 1, 0, 3, 4, 3]);

function countBoomerangs(arr) {
  let curr;
  let prev;
  for (const item of arr) {
    curr = item;
    console.log(`curr: ${curr}, prev: ${prev}`);
    prev = curr;
  }
}
Or we can actually do it at the point just before the next loop starts:
countBoomerangs([1, 2, 1, 0, 3, 4, 3]);

function countBoomerangs(arr) {
  let curr;
  let prev;
  for (const item of arr) {
    prev = curr;
    curr = item;
    console.log(`curr: ${curr}, prev: ${prev}`);
  }
}
It has to be these two points — either just before the loop starts or just before it ends.
Just before the loop starts really meaning before we update curr to the current variable.
Just before it ends really meaning after we’ve finished using the stale prev variable — before we update it to be stale again the next iteration — do you get it? “Stale” because it’s always going to be out of date; it’s supposed to be.
And what about the previous one before this previous one relative to the current variable? The previous previous variable?
We use the exact same logic — although there’s something you’re going to need to watch out for…
So to track the previous previous variable I need to set prev2 to the stale stale iter variable — if you know what I mean, ha ha…
So like before either at the beginning of the loop like this:
countBoomerangs([1, 2, 1, 0, 3, 4, 3]);

function countBoomerangs(arr) {
  let curr;
  let prev;
  let prev2;
  for (const item of arr) {
    // Stale stale iter variable (stale of the stale)
    prev2 = prev;
    // Just stale (one stale)
    prev = curr;
    // Before update to current (non-stale)
    curr = item;
    console.log(`curr: ${curr}, prev: ${prev}, prev2: ${prev2}`);
  }
}
But what about at the end?
We have to maintain the order prev2 → prev → curr — we always need to update prev2 to prev’s stale value before updating prev to curr’s stale value.
function countBoomerangs(arr) {
  let curr;
  let prev;
  let prev2;
  for (const item of arr) {
    curr = item;
    console.log(`curr: ${curr}, prev: ${prev}, prev2: ${prev2}`);
    // Stale stale iter variable (stale of the stale) — updated first
    prev2 = prev;
    // Just stale (one stale)
    prev = curr;
  }
}
So finally we’ve been able to track all 3 of these variables to check for the boomerang.
Now all that’s left is to make the check in every loop and update a count — pretty straightforward stuff:
function countBoomerangs(arr) {
  let curr;
  let prev;
  let prev2;
  let count = 0;
  for (const item of arr) {
    prev2 = prev;
    prev = curr;
    curr = item;
    if (prev2 === curr && prev !== curr) {
      count++;
    }
  }
  return count;
}
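Quick sanity check on the lists from earlier (repeating the full function so this snippet runs on its own):

```javascript
// Complete function from the walkthrough, repeated so this runs standalone
function countBoomerangs(arr) {
  let curr, prev, prev2;
  let count = 0;
  for (const item of arr) {
    prev2 = prev;
    prev = curr;
    curr = item;
    // A boomerang: the value two steps back equals the current one,
    // with a different value in between
    if (prev2 === curr && prev !== curr) count++;
  }
  return count;
}

console.log(countBoomerangs([4, 9, 4, 6, 3, 8, 3, 7, 5, -5, 5])); // 3
console.log(countBoomerangs([1, 5, 1, 5, 1, 5, 1])); // 5 — overlaps counted
```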
I noticed we could have just used a traditional for loop and used the iteration counter to get the sub-list up to 3 numbers back:
Either this:
function countBoomerangs2(arr) {
  for (let i = 0; i < arr.length; i++) {
    const prev2 = arr[i - 2];
    const prev = arr[i - 1];
    const curr = arr[i];
    console.log(`curr: ${curr}, prev: ${prev}, prev2: ${prev2}`);
  }
}
or this:
function countBoomerangs2(arr) {
  for (let i = 0; i < arr.length; i++) {
    const [prev2, prev, curr] = arr.slice(i - 2, i + 1);
    console.log(`curr: ${curr}, prev: ${prev}, prev2: ${prev2}`);
  }
}
I’ll add a check for when the iteration counter has gotten up to 3 numbers, to avoid the undefined stuff. This would definitely throw a nasty out-of-range error in a stricter language like C#.
function countBoomerangs2(arr) {
  for (let i = 0; i < arr.length; i++) {
    // ✅ Only check once we have at least 3 items
    if (i >= 2) {
      const [prev2, prev, curr] = arr.slice(i - 2, i + 1);
      console.log(`curr: ${curr}, prev: ${prev}, prev2: ${prev2}`);
    }
  }
}
And the comparison will be just as before:
function countBoomerangs2(arr) {
  let count = 0;
  for (let i = 0; i < arr.length; i++) {
    if (i >= 2) {
      const [prev2, prev, curr] = arr.slice(i - 2, i + 1);
      if (prev2 === curr && prev !== curr) {
        count++;
      }
    }
  }
  return count;
}
The best part about this alternative is it lets us set up a beautiful one-liner solution with functional programming constructs like JavaScript’s reduce().
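For instance, one possible reduce() version could look like this (my own sketch; the guard and comparison mirror the loop above):

```javascript
// A possible one-liner with reduce() — my own sketch of the idea
const countBoomerangs3 = (arr) =>
  arr.reduce(
    (count, curr, i) =>
      // Same check as before: the value 2 steps back equals the current
      // one, with a different value in between
      i >= 2 && arr[i - 2] === curr && arr[i - 1] !== curr
        ? count + 1
        : count,
    0
  );

console.log(countBoomerangs3([4, 9, 4, 6, 3, 8, 3, 7, 5, -5, 5])); // 3
console.log(countBoomerangs3([1, 5, 1, 5, 1, 5, 1])); // 5
```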
This new IDE from Google is seriously revolutionary.
Project IDX is in a completely different league from competitors like VS Code or Cursor or whatever.
It’s a modern IDE in the cloud — packed with AI features.
I was not surprised to see this sort of thing coming from Google — with their deep-seated hatred for local desktop apps.
Load your projects from GitHub and install dependencies instantly, without any local downloading.
It’s a serious game changer if your local IDE is hoarding all your resources on your normie PC — like VS Code does a lot.
Tari for the last time, VS Code is a code editor and not an IDE!! Learn the difference for goodness sake!!!
Ah yes, a code editor that eats up several gigabytes of RAM and gobbles up so much battery life that your OS itself starts complaining bitterly.
Such a lightweight code editor.
Most certainly not an IDE.
Anyway, so I could really see the difference between VS Code and IDX on my old PC.
Like when it came to indexing files in a big project to enable language features like IntelliSense and variable rename.
VS Code would sometimes take forever to load and it might not even load fully — bug?
I would have to reload the window multiple times for the editor to finally get it right.
But with IDX it was amazing. The difference was clear.
Once I loaded the project I had all the features ready. Everything happened instantly.
Because all the processing was no longer happening in a weak everyday PC but now in a massively powerful data center with unbelievable speeds.
Having a remote server take care of the heavy lifting drastically cuts down on the work your local PC has to handle.
Including project debugging tasks that take lots of resources — like Android Emulator tests.
The Android Studio Emulator couldn’t even run on my old PC without crashing miserably, so seeing the IDX emulator spring to life effortlessly with near-zero delay was pretty exciting.
Templates are another awesome convenience — just start up any project with all the boilerplate you need — you don’t even need to fire up the CLI.
And no, you’re not stuck with those templates either — you can just start from a blank template and customize it as much as you want — like you would in local editors.
Another huge part of IDX is AI of course — but lol don’t think they’ll let you choose between models like all those others.
It’s Gemini or nothing. Take it or leave it.
Not like it’s nearly as bad as some people say — or bad at all.
And look, it indexes your code to give you responses from all the files in the codebase — something that’s becoming a standard feature you’d expect across editors.
And it looks like it has some decent multi-step agentic AI editing features.
I was impressed — it tried creating the React app and it failed because there were already files in the folder — then see what happened…
It intelligently knew that it should delete all the files in the project and then try again.
It automatically handled a failure case that I didn’t tell it about beforehand and got back on track.
And guess what?
After it tried deleting the files and it didn’t work — cause I guess .idx can’t be deleted — it then decided to create an empty subfolder to create the React project in.
I never said anything about what to do about non-empty files in the folder in this case, it just knew. It didn’t keep trying blindly for something that just wasn’t working.
Pretty impressive.
Okay, but it did partially fail — pretty miserably — when it came to creating the React file.
It put the CSS code in the JSX file where it was obviously not supposed to be.
So clearly whatever model they’re using for this still can’t compare to Claude. Cause Claude-powered Windsurf would never make this mistake.
But not like the CSS itself was bad.
But of course this will only continue to improve. And Claude will also get even better — as you might have seen from the recent Claude 3.7 release.
So even if you stick with your local IDE, IDX is still a solid, rapidly improving, AI-powered alternative for writing code faster than ever.
VS Code is no longer about having an open-source lightweight editor freely available to anyone.
Now it’s about winning the AI race with GitHub Copilot.
At first they were playing it cool — Copilot was just another extension you installed like Tabnine and others.
But soon VS Code slowly started being used more and more as a marketing platform to get people to pay for Copilot.
Copilot promo in the Welcome Page — a great, hard-to-miss location right?
And if you miss that, you most certainly wouldn’t miss the Copilot button right next to the search bar.
Visible not just on the Welcome page but on any page — or no page open at all.
I knew it once I saw this button appear. Yet another app had hopped on the AI train.
Copilot was no longer being treated as just another VS Code extension, but a core part of the editor.
The face of VS Code.
When you think of VS Code, you are to immediately think of Copilot.
Well, shouldn’t we have seen this coming?
No matter how much they tuned down the branding and positioned it as a free and happy open-source tool, at the end of the day it was always a corporate-owned product.
They always had the power to change the direction it would take in the future.
And with all the serious competition from Cursor and Windsurf it was only a matter of time.
Especially with these alternatives being forks of VS Code — incredibly easy to switch and still feel at home.
It was the same competition that forced them to create a free tier for Copilot.
Every single IDE is going the AI way and they have to keep up.
They’re not the only ones feeling the heat either.
JetBrains is about to make AI take the center-stage in all their IDEs with a new agentic tool called Junie.
After the disastrous performance of their AI Assistant they also realized they needed to step up their AI game.
So very soon WebStorm and PyCharm and IntelliJ are going to be all about AI.
Android Studio not left behind.
We can complain about this as much as we want but this is the direction everyone is taking now.
Many other apps like Notion and Adobe tools have also made people angry about their heavy focus on AI at the cost of other features and fixes.
But the AI wave will continue to spread.
AI in your sleep, AI when you wake up, AI for breakfast lunch and dinner. AI before bed.
AI is slowly taking over coding but many programmers are still sticking their head in the sand about what’s coming…
Now Google’s Chief Scientist just made a telling revelation: AI now generates at least 25% of their code.
Can you see — it’s happening now at top software companies with billions of active lines of code.
All these people still acting like AI-assisted coding is just a gimmick that nobody actually uses in production.
Some people in my comment sections even said that using AI tools doesn’t make you more productive…
Like come on — I thought we all agreed GitHub Copilot was a smash. Weren’t the over 1.3 million paying users they had this time last year enough proof?
In case you don’t know, software developers are not a very easy group of people to monetize — your tool must be really something to have over 1.3 million of them pay for it.
And even if most of these are from businesses, something tells me not every developer tool can get anywhere close to these numbers from B2B.
I remember the first time I used Copilot. Mhmm nice tool, pretty decent suggestions, not bad…
Only like a few days later when I had to code without it from connection issues — that’s when I realized just how much I’d already started depending on this tool. I was already getting used to the higher quality of life and I wasn’t even fully aware.
That Tab key for accepting completions — which did I even press more, Tab or the semicolon?
Before:
Type 48 characters — Enter
Type 60 characters — Enter
Type 55 characters — Enter
After:
Type 9 characters — Tab — Enter
Type 2 characters — Tab — Enter
Tab — Enter
The quality of life difference was undeniable. The productivity difference was undeniable.
And one thing this shows you — or reminds you — is that programming has always been an act of thinking, not typing.
It’s always been the thinking part that took most of the time — the higher level system planning and designing, the lower level algorithms and design patterns.
The typing has always been straightforward. And pretty mundane actually.
Copilot isn’t just a code completion tool, it’s a thought predicting tool.
It’s not just about helping you code faster, it’s about knowing what you’re thinking and eliminating the gap between that thought and its actualization in the real world. Pressing Tab instead of dozens of characters.
It’s been so useful and this is just at the lower line-by-line level — predicting your thoughts for what each line should be.
Now we have tools like Supercomplete predicting your intent across entire files, making things even easier.
Cursor Compose and Windsurf Cascade bringing your thoughts to life across several files in your codebase.
And they are increasingly magnifying the impact and value of those thoughts.
Let’s say you want to add a searching feature to your web app.
Copilot could give you completions for each line of the UI components + event handler definition and the search algorithm or external libraries you decide to use.
Supercomplete could automatically create an empty event handler for you to start writing the definition.
But with the agentic tools you could just say, “add a search feature to the app”, and it could handle everything — all the above — including NPM library installations.
How long until you can literally say, “build and deploy an e-commerce app…” and that’ll be all it takes?
Imagine you give such a vague description of what you want and then the AI autonomously asks you questions to get the specific requirements and avoid all ambiguity.
It seems more and more like a matter of when, not if.
Adobe just changed everything for AI video generation.
Their new Firefly video model is finally here and it already has insane advantages over Sora and even Veo 2.
This is insane.
Like you’re telling me this wasn’t taken by some National Geographic photographer. This actually came from an AI.
So where exactly are we going to be a year from now?
I agree with Scarlett Johansson (lol) — we should really do something about deepfakes before it gets out of hand.
But only if we can pretend like there’s any going back from this — which there isn’t…
Massive misinformation is on its way — already here as we saw from the fake AI video of celebrities protesting against Kanye West.
At this rate soon video evidence won’t be able to hold up in court — they’ll just say it’s a deepfake right? Who would know? It’s already happening for audio, right?
From a different video generator, but this was 7 months ago! From China, ha ha ha…
With this new update Adobe now has video + image + vector generation all in the same app — genius move to gain an upper hand in this part of the AI race.
They know that the value of an AI tool isn’t just in how good it is — but in how deeply integrated it is into a workflow that’s convenient and familiar.
Even if other tools like Sora produce slightly better videos, the deep integration of all their creation tools will still give them a major advantage.
With one ecosystem you can generate images… then edit them yourself… or with the AI…
Then transform the image into dynamic videos and scenes… Then create audio for the scenes from text…
Then translate the audio in the scenes to several other languages — syncing the lips perfectly (wild stuff).
Then you can still create fresh videos from scratch with text prompts giving you precise control over the style and camera angles…
And you can still edit them yourself for total precision…
The integration is simply unbeatable — and they already have a strong user base of many millions.
Everything is so connected and cohesive from conception to production.
And they realize this and so they tie everything together with Firefly in their Creative Cloud subscription.
And of course you’ve still got the standalone subscription for general users, so they still compete directly with Sora and Veo 2.
And once again they claim to have only used “licensed” and “public domain” content for the training, just like they did for the image generation model.
Of course hardly anyone is going to care about this when choosing tools, ha ha.
But I guess it’s a nice way to virtue signal and appear morally superior to lawsuit-ridden rivals like Midjourney and OpenAI, lol.
It’ll just be really interesting to see how good these tools are gonna get.
Imagine: complete hyper-realistic, engaging, comprehensive 2-hour long movies from a few paragraphs of prompts.
The makers of WebStorm and IntelliJ have zero intentions of being left behind in the race for the ultimate coding AI IDE…
Enter Junie — an incredible upcoming coding agent (that word again) from JetBrains that could make you regret ever paying for Windsurf or Cursor (or Copilot, ha ha).
The agent will understand your code on a deep level and make high-level changes — seriously line-by-line coding is becoming a thing of the past guys…
And of course this will directly compete with the incredible Windsurf Cascade feature — and Cursor Composer.
But it looks like Junie will go even further with deep understanding of context and learning your unique coding style to keep things consistent.
It’ll even be able to automatically create and run tests for your code in a structured way.
Similar to all those testing extensions for VS Code.
But even better cause this would work for several languages.
Any time you tell it to make a change it would run the tests automatically and ensure that the changes it made are correct — something you can also verify yourself of course.
Looks like it could even create its own tests for the specific changes it makes to your code…
You see with tools like this, the developer role will shift a lot from typing and code monkey-ing, to very high-level direction and monitoring.
You may still need to learn programming languages, but mainly for keeping the AI in check and avoiding errors.
And eventually even this could be done by the AI itself.
JetBrains certainly has a key advantage here with their years of experience creating coding tools.
They also have a strong user base of over 11 million people so there’ll be instant adoption once Junie goes live.
People already familiar with the Jetbrains IDE experience would have no reason to switch to VS Code forks like Cursor and Windsurf.
I also wonder whether they’ll use their own custom model or let devs switch between the big names.
All the current players have gone the latter route, so it’ll be interesting to see how that goes.
Of course this isn’t their first time jumping on the AI bandwagon… they’ve already released an AI coding assistant before… but it looks like it failed miserably.
Junie is clearly a rebrand to distance themselves from the flop, especially with the fresh new agentic abilities it’s gonna have.
OpenAI just dropped a brand new model to try to rise above the DeepSeek craze and get back into the spotlight.
And it looks like they’ve been pretty successful with that…
o3-mini breaks boundaries and re-overtakes DeepSeek in key areas, including in this incredibly tough AI benchmark that no other model could reach even 10% accuracy in…
Doubling down on their questionable naming scheme to release o3 mini — a seemingly faster and smaller version of the o3 that came out a few weeks back.
And the biggest deal here is the price — it’s free to use.
Showing just how major the DeepSeek blow was — from confident delusions of a $2000 per month subscription for o3, to now letting anyone access it.
They’ve gotten a huge huge reality check from the competition.
Now they’re going to be thinking long and hard about pricing anytime a new model comes out.
So there are actually 3 sub-tiers for the o3-mini tier: low, medium, and high.
Pretty cool to see the smaller o3-mini outperform even the default version of o1 in areas like coding and math.
But DeepSeek is still better in certain areas — like coding:
But o3-mini did outclass DeepSeek on an extremely difficult AI benchmark called Humanity’s Last Exam (HLE).
HLE is basically a set of 3,000 questions across dozens of subjects like math and natural science — it’s like the OG benchmark to check just how advanced a model is at reasoning and knowledge.
So it’s free for everyone right now, but of course not exactly…
It still has a usage cap like all the other super-expensive models like o1 — even paying Plus users have this cap.
And you only get o3-mini low and o3-mini medium for free.
Sneaky clever naming — they can still technically say they made “o3” free, when you don’t even get the best sub-tier of the mini-tier of o3.
But I guess this just means we’ll be getting a new DeepSeek model soon (lol)? Since they apparently trained their r1 model with tons of OpenAI model data.
And couldn’t OpenAI also “steal” this new DeepSeek training method to create a derived model using o3 data? That could even be more accurate than DeepSeek since they’d have access to the data directly.
It’s clear everyone is really buckling up now… Google too announced a model shortly after the DeepSeek news — Gemini 2.0 Pro.
Simply couldn’t ignore the speed at which this thing blew up… #1 in App Store and Play Store in like how many days? No way…
One thing we can see from this: investing so much money to stay ahead of the AI race and maintain market share MAY not be worth it after all.
If it’s always going to be this easy for competitors to catch up to new models with lower prices, will they ever be able to eventually get a worthwhile ROI from all this investment?
How much longer can they continue to justify the value to investors and keep getting all those billions and billions?
And clearly there’s no other real moat to keep users locked in, as we saw with how people rushed to download DeepSeek.
It may not be long before we start seeing ChatGPT experience enshittification as they try to squeeze out any cash possible.
Maybe we’ll start seeing ads in ChatGPT like Bing Chat does now, ha ha! Even kind of surprising to see how Google’s Gemini still hasn’t gotten any ads at this point.
Unless they figure out how to cut model costs drastically, they’re going to have to keep burning more cash while still keeping prices as low as possible — hoping not to get outshined by the competition so they can keep getting more cash to burn.
One thing’s clear: the next few months will decide a lot.
Big Tech truly got the shock of their lives from China.
They really thought they were light years ahead of everyone else just because they had all the money in the world.
Much cheaper yet more accurate
But DeepSeek just taught them a lesson never to forget.
These tech giants blindly poured all those billions and billions of dollars into their models in desperate attempts to stay ahead in the AI race.
DeepSeek spent just a tiny tiny fraction of that — less than $6 million — to train a model that destroys 97% of all the major models like GPT-4 and Gemini in every way.
And far far cheaper to run too — China 😅
You can easily see how DeepSeek is by far the most cost-efficient of all the major models.
And not just relatively efficient but more intelligent on absolute measures.
Only o1 can compare — and you can see just how ridiculously expensive it is — just look at the crazy gap to DeepSeek and all the rest.
DeepSeek is at least 20 times cheaper than o1 and yet matches it in every way.
Well well well.
So all those heavily funded genius computer scientists working on all those models — got thoroughly outclassed by a tiny side project from like 50 random guys from China.
And then the final nail in the coffin — open-source and free to use.
These tech giants tried so hard to keep the inner workings of all their fancy models from the public — so many trillions to be made from being the first and only to achieve and control the holy grail of superintelligence, right?
Lol remember when OpenAI used to actually be open…
But now this one-year-old startup just came out of absolutely nowhere and crashed the entire for-profit party.
Not just open-source but with MIT license — meaning you get to do basically whatever you want with it.
The entire algorithm is all out in the open for everybody and their dog to see. And test and run.
Many users have already been talking about how much more creative and clever DeepSeek feels compared to ChatGPT.
With all of this it wasn’t surprising to see their official apps rocket to the top of the charts on both app stores.
It’s funny how all this comes just a few days after the so-called Stargate Project that’s costing as much as $500 billion.
These huge US tech companies have been swimming in so much cash and have been getting lazy.
Their main focus seemed to be just pumping in as much cash as possible to fatten up their models — and then hoping that the models just keep improving from getting bigger and bigger.
GPT-3.5 — 175 billion parameters
GPT-4 — 1.8 trillion parameters
GPT-4 was largely better, but was it anywhere close to TEN times better? Of course not. It seemed even worse at some tasks.
Instead of trying to improve how they train the models and looking for ways to improve on the transformer LLM architecture, they just kept doubling down on model size and raw computing power:
Gobbling up chips from Nvidia and shooting their stock price to the moon ($600 billion wiped out in the last few days btw)
Trying to build massive NUCLEAR-powered data centers (really?)
Now DeepSeek just educated them on how much better a model with the same resources can be with superior training methods.
It’s a wake-up call that spread panic across the US stock market.
The disruption will hopefully send more researchers back to the drawing board to focus on what matters, leading to more solid AI progress across the board.