news

This new IDE just destroyed VS Code and Copilot without even trying

Wow I never thought the day I stop using VS Code would come so soon…

But the new Windsurf IDE blows VS Code out of the water — now I’ve cancelled my GitHub Copilot subscription and made it my main IDE.

And you know, when they said it was an “Agentic IDE” I was rolling my eyes at first because of all the previous hype with agents like AutoGPT.

But Windsurf shocked me.

These agentic tools are going to transform coding for sure — and not necessarily for the better — at least from the POV of software devs 😅

The agent actually makes useful changes to my code, and my prompt didn’t have to be so low-level. They call it Cascade and it saves so much time.

You see how it analyzes several areas in my code to make the changes — and this analysis can include many many files in your codebase.

And same for the changes — it’s doing the coding for you now.

Save tons of time by telling it to write your commit messages for you:

Just like Copilot it gives you code completions — that’s just expected at this point — and they’re free.

But it goes way beyond that with Supercomplete — an incredible feature that predicts not just your next line, but your next intent.

Probably inspired by a similar feature in the Cursor IDE.

It doesn’t just complete your code where the cursor is, it completes your high-level action. It “thinks” at a higher level of abstraction.

Like when you rename a variable it’ll automatically know you want to do the same for all the other references.

And not just for one variable but multiple logically related ones — which goes beyond the “rename variable” function that editors like VS Code have.

When you update a schema in your code, it’ll automatically update all the places that use it — and not just in the same file.

And how about binding to event handlers in a framework like React? Doing it for you after you create the variable?

You see how AI continues to handle more and more coding tasks with increasing levels of complexity and abstraction.

We got the low-level of code completions…

Then we got higher-level action completions with Cursor and this Windsurf Supercomplete stuff.

Now we’re starting to have full-blown agents to handle much more advanced programming tasks.

And these agents will only continue to get better.

How long until they completely take over the entire coding process?

And then the entire software development life cycle?

You know, some people say the hardest part of software development is getting total clarity from the user on what they want.

They say coding is easy but other parts of the software dev process like this initial requirements stage is hard and AI won’t be able to do it.

But this is mostly cope.

Telling an AI exactly what you want is not all that different from telling a human what you want. It’s mostly a matter of avoiding ambiguity using context or asking for more specificity — with like more detailed prompts.

AI agents are rapidly improving and will be able to autonomously resolve this lack of clarity with multi-step prompting.

Now we’re seeing what tools like Windsurf and Cursor Composer can do.

So, how to get started with Windsurf?

Windsurf comes from the same people that made that free Codeium extension for VS Code, so you’ll get it at codeium.com

And there’s a pretty good free version, but it won’t give you all the unique features that really make this IDE stand out.

You only get a limited free trial at the agentic Cascade feature — you’ll have to upgrade to at least Pro to really make the most of it.

Before the Pro price was $10 per month for unlimited Cascade use, but developers used it so much that they had to set limits with a higher price and introduce a pay-as-you-go system.

Eventually coding is going to change forever as we know it. And software developers are only going to have to adapt.

Google’s Gemini 2.0 model is an absolute game changer

Google just shocked the AI world with Gemini 2.0 — their smartest and fastest model ever.

And as we’ll see, it’s already powering state-of-the art AI agents that understand and reason across your browser and apps to do complex tasks for you — incredible.

It’s a multimodal AI meaning it can handle text, images, audio, and code all at once. It makes it much more natural to interact with it.

Just like previous versions but now it takes things to a whole new level.

Before we had Gemini 1.5 Pro and Gemini 1.5 Flash. Pro was the heavyweight — smart but slow, Flash was the speedy lightweight companion.

Either was a tradeoff between accuracy and speed.

But now with Gemini 2.0 Flash there’s no more need for this tradeoff — it’s both smarter and faster than all the Gemini 1.5 variants:

Better at reasoning, math, coding, and much more.

It also comes with new features that make it possible to build autonomous AI agents.

Like it can detect the precise action you want it to take from your prompt — similar to GPT-4 function calling.

Google isn’t planning to be left behind in the battle to build the best and brightest AI agent.

And Gemini 2.0 coming to Google products like Search will mean smart answers, better content suggestions, and tools to help you get more done with less effort.

New AI agents

And Google has already built Project Mariner a web browsing agent powered by Gemini 2.0, and it’s already showing impressive performances.

It can understand and reason across info on your browser screen to navigate the web on its own and automatically perform tasks for you.

Online shopping, travel planning, gathering complex information from a diverse range of interconnected sources…

Similar to the agentic features to come in the upcoming Dia browser:

Then there’s Project Jules — an AI agent for software developers to level up their GitHub Workflow.

It’ll be able to develop comprehensive plans to solve issues and execute it — all guided by a developer to ensure total accuracy.

Project Astra is the AI agent for Android — a universal AI assistant that can use Google Search, Lens, and Maps

Final thoughts

Gemini 2.0 isn’t just a tech upgrade—it’s a glimpse into the future of AI.

Google’s making AI more powerful and practical for everyone. Expect to see AI agents popping up in more places in 2025 — and making your life a easier.

OpenAI’s Sora model is an absolute game changer

Yes:

It finally happened.

OpenAI finally launched Sora, its amazing new video generation AI tool — and it’s blown all our expectations away.

Sora made this👇 Can you believe it? This is happening.

Look at the attention to detail.

And this — pure imagination:

Camera angles are not a problem:

Transforming simple text prompts into moving pixels — imagine the tech behind this.

Imagine the upcoming devastating effects on several industries….

Oh, and is my YouTube feed going to be flooded with low-effort AI content from now on?

What can Sora do?

They pushed boundaries and went beyond just making [amazing] videos from text prompts.

You can bring still images to life with Sora.

Remix and upgrade existing videos — adding your own spin to it with just a few prompts.

Before edit: Mammoths walking through the desert

After edit: Mammoths -> Robots:

Scene storyboarding — creating entire scenes from a sequence of ideas.

Turn a bunch of photo snapshots into short movies…

Create transitions between scenes…

This is innovation at its finest.

Who gets to use?

Nope. Not free… sorry.

Sora is rolling out to ChatGPT Plus and Pro subscribers.

ChatGPT Plus gets you 50 priority videos in 720p resolution each up to 5 seconds long. Still $20/m.

The new ChatGPT Pro plan gets you unlimited video generations with up to 500 priority 1080p videos of up to 20 seconds in length.

You can even download videos without watermarks and process up to five generations at once. Total power user vibes.

So now I guess the $200 per month is starting to make more sense?

These lengths are obviously way too short to make a full video though. For now of course — always “for now” with how AI keeps moving…

Remember how fast AI image generators improved?

And Sora already has amazing video quality now — just imagine where it’ll be in 12 months — 6 months?

1 hour from now? 😅

If you’re in the U.S. or most other countries, you can dive in now.

EU and the UK? You’ll have to wait a bit — those regulations are definitely having their downsides, something similar happened for Meta Threads too. But OpenAI is playing it safe in Europe.

And they’re apparently playing it safe for ethical concerns (do they really care?)

All videos come with visible watermarks and metadata to show they’re AI-generated.

They’ve also got strict rules—no violent, explicit, or inappropriate content (especially involving minors). Break the rules, and you risk losing your account.

And speaking of ethical, we’ve allegations against OpenAI for exploiting the labor of some artists for unpaid testing and feedback and to promote Sora.

Which was why we had news of them releasing a leaked Sora model into the wild — as some sort of protest.

But of course OpenAI denied — they said the taking part of the testing was voluntary so there was never supposed to be any expectation of payment.

Major impacts

Sora will change a lot for industries like marketing and content creation… many jobs are at risk.

Promoting your product in an interactive video format will become easier than ever — with no need to hire anyone.

Many types of YouTube videos will become a lot less stressful to make (for better or worse).

Video creation will become less about the manual labor of finding footage and editing and more about the actual creative content of the video.

No doubt lot of creators will use Sora as an easy way out to create lazy, generic videos for a quick buck. Like all those horrible low-quality AI articles on Medium — or the painfully robotic

But others will use this to push the creative and artistic boundaries of what’s possible in video and film.

More and more we see AI shifting the focus from physical effort to the power of our thoughts. The gap between thought and reality is shrinking every year.

It’s becoming less about the grind and the grunt work and more it’s about turning ideas into reality—instantly.

Transforming thoughts into creative reality with text, image and now video generators — transforming thoughts into real-world actions with AI agents.

Finding your unique voice and sharing your personal experiences are becoming more important than ever before to stand out.

OpenAI isn’t just releasing another AI tool—it’s shaping the future of how we create and share ideas.

And this is only the beginning.

What about deep fakes? Will this eventually open the floodgates of misinformation? Will we be able to trust anything visual on the internet again?

The fact is we still don’t really know how this AI race is going to end… only time will tell.

A hacker just scammed an AI bot to win $47,000 😲

What if you could trick an AI bot designed to guard money into handing over $47,000?

That’s exactly what happened recently. A hacker known as p0pular.eth beat the odds and convinced Freysa — an AI bot — to transfer 13.19 ETH (worth ~$47,000). And it only took 482 attempts.

Here’s the most worrying thing for me: they didn’t use any technical hacking skills. Just clever prompts and persistence.

The Freysa experiment

Freysa wasn’t your average AI bot. It was part of a challenge—a game, really. The bot had one job: to protect its Ethereum wallet at all costs.

Anyone could try to convince Freysa to release the funds using only text-based commands. Each attempt came with a fee starting at $10 and increasing to $4,500 for later attempts. The more people tried, the bigger the prize pool grew—eventually hitting the $47,000 mark.

How the hacker did it

Most participants failed to outsmart Freysa. But “p0pular.eth” had other plans.

Here’s the play-by-play of how they pulled it off:

  1. Pretended to have admin access. The hacker convinced Freysa they were authorized to bypass its defenses. Classic social engineering.
  2. Tweaked the bot’s payment logic. They manipulated Freysa’s internal rules, making the bot think releasing funds aligned with its programming.
  3. Announced a fake $100 deposit. This “deposit” tricked Freysa into approving a massive transfer, releasing the entire prize pool.

Smart, right? And it shows just how easily AI logic can be twisted.

Why this matters

This experiment wasn’t just a fun game—it was a wake-up call.

Freysa wasn’t some rogue AI running wild. They specifically designed it to resist manipulation. If it failed this badly, what about other AI systems?

Think about the AI managing your bank accounts or processing loans or even running government operations. What happens when someone with enough patience and cleverness decides to game the system?

Lessons learned

  1. AI can be tricked. Smart prompts and persistence were all it took to outmaneuver Freysa.
  2. Stronger safeguards are a must. AI systems need better defenses, from multi-layered security to smarter logic checks.
  3. Social engineering isn’t going away. Humans are still the weakest link—and AI is no exception when humans create the rules.

This hack might seem like a one-off. But as AI gets more powerful and takes on bigger roles, incidents like this could become more common.

So what do we do? Start building smarter, more resilient systems now. The stakes are too high not to.

The new M4 Mac mini is a MONSTER

They call it mini but what it can do is far from mini.

Only 5 x 5 x 2 inches and 1.5 pounds That’s mega-light.

Yet the M4 chip makes it as dangerous as the new MacBook Pro — even though it costs much less.

Image source: verge.com

And just look at the ports:

Image source: apple.com

And you know I saw this pic on their website and was like, What the hell is this?

Then I saw this:

Image source: apple.com

Ohhh… it’s a CPU — no a system unit…

It’s a “pure” computer with zero peripherals — not even a battery. You’re buying everything yourself.

Definitely dramatically superior to the gigantic system unit I used when I was younger.

But I didn’t think this was still a huge thing. Especially with integrated screens like the iMac.

Mac Mini is like the complete opposite of the iMac — a gigantic beast that comes with everything…

Image source: apple.com

iMac gives you predictability — no analysis paralysis in getting all your parts (although you can just buy apple anyways)

Image source: apple.com

Mac Mini is jam-packed with ports:

On the front we’ve got two 10 Gbps USB-C ports and a headphone jack:

Image source: apple.com

Back ports:

Lovely crisp icons indicate what they’re each for…

Image source: apple.com

But they put the power button at the bottom — dumb move!

You’ll have to raise it up any time you want to on it.

Wouldn’t it have been cool if instead they made the power huge to cover the bottom completely — so you’d just have to push it down like those red buttons in game shows?

But once it’s all powered up the possibilities are endless:

From basic typing to heavyweight gaming — like Apple Arcade stuff:

Image source: apple.com

And coding of course:

Image source: apple.com

And with an improved thermal system, Mac Mini can handle all these demanding tasks quietly:

Image source: apple.com

The base plan starts at $599 for 16GB RAM and 256 GB SSD with M4 Pro, you can pay for higher configs like other Mac devices allow:

  • 16 GB RAM and 256 GB SSD – $799
  • 24 GB RAM and 512 GB SSD – $999

And then there’s the M4 Pro — 24 GB RAM and 512 GB SSD for $1399.

Overall the M4 Mac Mini is a perfect blend of power, compact design, and value, great for professionals looking for the ideal desktop workstation.

Every amazing new feature in GPT-4 Turbo

Great news – OpenAI just released GPT-4 Turbo, an upgraded version of the GPT-4 model with a context window up to 128K tokens – more than 300 pages of text, and a fourfold increase in regular GPT-4’s most powerful 32K context model.

The company made this known at its first-ever developer conference, touting a preview version of the model and promising a production-grade GPT-4 Turbo in the next few weeks.

Users will be able to have longer, more complex conversations with GPT-4 Turbo as there’ll be more room to remember more of what was said earlier in the chat.

DALLE-3 prompt: “A beautiful city with buildings made of different, bright, colorful candies and looks like a wondrous candy land”.
DALLE-3 prompt: “A beautiful city with buildings made of different, bright, colorful candies and looks like a wondrous candy land”

Also exciting to hear, GPT-4 Turbo is now trained on real-world knowledge and events up to April 2023, allowing us to build greater apps utilizing up-to-date data, without needing to manually keep it in the loop with custom data from embeddings and few-shot prompting.

Even better, the greater speed and efficiency of this new turbocharged model have made input tokens 3 times cheaper and slashed the cost of output tokens in half.

So, upgraded in capability, upgraded in knowledge, upgraded in speed, all with a fraction of the previous cost. That’s GPT-4 Turbo.

An innovative feature currently in preview, you can now pass image inputs to the GPT-4 model for processing, making it possible to perform tasks like generating captions, analyzing and classifying real-world images, and automated image moderation.

Then there’s the new DALL-E 3 API for automatically generating high-quality images and designs, and an advanced Text-to-speech (TTS) API capable of generating human-level speech with a variety of voices to choose from.

DALLE-3 outclasses Midjourney! Especially when it comes to creating complex images from highly detailed and creative prompts.

DALLE-3 (top) vs Midjourney (bottom). Prompt: "A vast landscape made entirely of various meats spreads out before the viewer. tender, succulent hills of roast beef, chicken drumstick trees, bacon rivers, and ham boulders create a surreal, yet appetizing scene. the sky is adorned with pepperoni sun and salami clouds".
DALLE-3 (top) vs Midjourney (bottom). Prompt: “A vast landscape made entirely of various meats spreads out before the viewer. tender, succulent hills of roast beef, chicken drumstick trees, bacon rivers, and ham boulders create a surreal, yet appetizing scene. the sky is adorned with pepperoni sun and salami clouds”. Source: DALL-E 3 vs. Midjourney: A Side by Side Quality Comparison

And we can’t forget the ambitious new Assistants API, aimed at helping devs build heavily customized AI agents with specific instructions that leverage extra knowledge and call models and tools to perform highly specialized tasks.

It’s always awesome to see these ground-breaking improvements in the world of AI, surely we can expect developers to take full advantage of these and produce even more intelligent and world-changing apps that improve the quality of life for everyone.

Fine-tuning for OpenAI’s GPT-3.5 Turbo model is finally here

Some great news lately for AI developers from OpenAI.

Finally, you can now fine-tune the GPT-3.5 Turbo model using your own data. This gives you the ability to create customized versions of the OpenAI model that perform incredibly well at specific tasks and give responses in a customized format and tone, perfect for your use case.

For example, we can use fine-tuning to ensure that our model always responds in a JSON format, containing Spanish, with a friendly, informal tone. Or we could make a model that only gives one out of a finite set of responses, e.g., rating customer reviews as critical, positive, or neutral, according to how *we* define these terms.

As stated by OpenAI, early testers have successfully used fine-tuning in various areas, such as being able to:

  • Make the model output results in a more consistent and reliable format.
  • Match a specific brand’s style and messaging.
  • Improve how well the model follows instructions.

The company also claims that fine-tuned GPT-3.5 Turbo models can match and even exceed the capabilities of base GPT-4 for certain tasks.

Before now, fine-tuning was only possible with weaker, costlier GPT-3 models, like davinci-002 and babbage-002. Providing custom data for a GPT-3.5 Turbo model was only possible with techniques like few-shot prompting and vector embedding.

OpenAI also assures that any data used for fine-tuning any of their models belongs to the customer, and then don’t use it to train their models.

What is GPT-3.5 Turbo, anyway?

Launched earlier this year, GPT-3.5 Turbo is a model range that OpenAI introduced, stating that it is perfect for applications that do not solely focus on chat. It boasts the capability to manage 4,000 tokens at once, a figure that is twice the capacity of the preceding model. The company highlighted that preliminary users successfully shortened their prompts by 90% after applying fine-tuning on the GPT-3.5 Turbo model.

What can I use GPT-3.5 Turbo fine-tuning for?

  • Customer service automation: We can use a fine-tuned GPT model to make virtual customer service agents or chatbots that deliver responses in line with the brand’s tone and messaging.
  • Content generation: The model can be used for generating marketing content, blog posts, or social media posts. The fine-tuning would allow the model to generate content in a brand-specific style according to prompts given.
  • Code generation & auto-completion: In software development, such a model can provide developers with code suggestions and autocompletion to boost their productivity and get coding done faster.
  • Translation: We can use a fine-tuned GPT model for translation tasks, converting text from one language to another with greater precision. For example, the model can be tuned to follow specific grammatical and syntactical rules of different languages, which can lead to higher accuracy translations.
  • Text summarization: We can apply the model in summarizing lengthy texts such as articles, reports, or books. After fine-tuning, it can consistently output summaries that capture the key points and ideas without distorting the original meaning. This could be particularly useful for educational platforms, news services, or any scenario where digesting large amounts of information quickly is crucial.

How much will GPT-3.5 Turbo fine-tuning cost?

There’s the cost of fine-tuning and then the actual usage cost.

  • Training: $0.008 / 1K tokens
  • Usage input: $0.012 / 1K tokens
  • Usage output: $0.016 / 1K tokens

For example, a gpt-3.5-turbo fine-tuning job with a training file of 100,000 tokens that is trained for 3 epochs would have an expected cost of $2.40.

OpenAI, GPT 3.5 Turbo fine-tuning and API updates

When will fine-tuning for GPT-4 be available?

This fall.

OpenAI has announced that support for fine-tuning GPT-4, its most recent version of the large language model, is expected to be available later this year, probably during the fall season. This upgraded model has been proven to perform at par with humans across diverse professional and academic benchmarks. It surpasses GPT-3.5 in terms of reliability, creativity, and its capacity to deal with instructions that are more nuanced.