Nvidia now generates three times as much code as it did before AI: its customized version of Cursor is being used internally by more than 30,000 Nvidia engineers.

Nvidia is using Cursor to boost its internal code commits by 3x across 30,000 engineers
(Image credit: Cursor)

Nvidia's internal code commits have tripled since it rolled out AI-assisted programming tools to 100% of its engineers. Cursor, an IDE made by Anysphere, now powers AI code generation for more than 30,000 developers at the company.

"Cursor is utilized across nearly all product domains and throughout every stage of software development. Teams rely on Cursor to write code, conduct code reviews, generate test cases, and perform QA. Cursor speeds up our entire SDLC. We've developed numerous custom rules in Cursor to fully automate complete workflows, unlocking its true potential." — Wei Luio, VP of Engineering at Nvidia.
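Nvidia hasn't shared what those custom rules contain, but Cursor keeps project rules as plain-text .mdc files under .cursor/rules/ in a repository, each pairing a short metadata header with instructions the agent follows. A hypothetical sketch of the kind of workflow rule the quote describes (every glob and instruction below is invented for illustration):

```
---
description: Standard workflow for bug-fix branches
globs: ["src/**/*.cpp", "src/**/*.h"]
alwaysApply: false
---
- Pull the linked ticket and summarize the reported symptom before editing.
- Reproduce the bug with a failing unit test, then apply the minimal fix.
- Run the formatter on touched files and draft the commit message from the ticket title.
```

Rules like this are how a team turns a general-purpose coding agent into one that follows its specific process without being re-prompted every time.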


Beyond that, Cursor has helped in other areas as well, such as debugging, where it excels at finding rare, persistent bugs and deploying agents to resolve them swiftly. Nvidia's teams are also streamlining their Git workflow with custom rules that let Cursor pull context from tickets and documentation, and they delegate bug fixes to Cursor along with tests that validate the results.
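The article doesn't show what those validating tests look like, but the usual pattern is to pin the expected behavior with regression tests that fail on the buggy code and pass once the agent's patch lands. A minimal Python sketch of that hand-off (parse_version and its bug are invented for illustration):

```python
# Regression tests pin the behavior a delegated fix must satisfy.
# parse_version is a stand-in for real code; the hypothetical bug was an
# IndexError when the optional ".PATCH" component was missing.
import pytest


def parse_version(text: str) -> tuple[int, int, int]:
    """Parse 'MAJOR.MINOR[.PATCH]' into a tuple (shown post-fix)."""
    parts = text.split(".")
    if not 2 <= len(parts) <= 3 or not all(p.isdigit() for p in parts):
        raise ValueError(f"bad version string: {text!r}")
    patch = int(parts[2]) if len(parts) == 3 else 0  # the fix: default to 0
    return (int(parts[0]), int(parts[1]), patch)


def test_two_part_versions_get_default_patch():
    assert parse_version("1.2") == (1, 2, 0)  # previously raised IndexError


def test_garbage_is_still_rejected():
    with pytest.raises(ValueError):  # the fix must not loosen validation
        parse_version("not-a-version")
```

If both tests pass under pytest after the agent's change, the delegated fix did what the ticket asked without breaking existing validation.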

"Before Cursor, Nvidia had other AI coding tools, both internally built and other external vendors. But after adopting Cursor is when we really started seeing significant increases in development velocity," said Luio. According to him, Cursor excels at grasping the complexity of extensive, evolving databases that might overwhelm a typical person.

New hires and trainees can also ramp up swiftly with Cursor, which acts as a knowledgeable guide to the codebase, while more experienced devs are freed up to tackle the challenges that demand human ingenuity, bridging the gap between concept and execution. In other words, generative AI is being employed for what it was always intended to handle: the routine tasks.

Cursor closed out its presser by claiming that the "bug rates have stayed flat" despite the improvements in coding volume and overall productivity. This matters because critical code, such as the GPU drivers that both gamers and professionals depend on, is now being partly written by AI. AI in Nvidia's pipeline is nothing new, either: the company has been training DLSS on a supercomputer for years.

Hassam Nasir
Contributing Writer
  • jp7189
    I'm not a developer, just a script kiddie. I tried Cursor in the past and it wasn't quite there. However, in the past 3 months everything changed with some of the newest models. I'm using gem 3 pro with Cursor now and it nails it every time. The only issue I have is when my prompts aren't precise enough. Sometimes I have to read through the chain of thought to see where the AI misunderstood my intention and then reprompt to fix the problem.

    I'm still no developer, but I can take any GitHub project that might be close and add what I need to get a useful tool for my specific need.
  • vanadiel007
    Ah, so this is why Nvidia driver quality is not what it used to be.
  • ezst036
    vanadiel007 said:
    Ah, so this is why Nvidia driver quality is not what it used to be.
    Yes, but it's also why everybody in corporate is freaking out.

    They're counting lines of code, not quality of code. But "triple the productivity" looks good on paper even if it makes the customers angry.

    They think they can always get triple the lines of code, triple the manufactured widgets, triple the miles, triple triple triple.

    The only thing us regular Joes are getting is triple the memory prices, triple the GPU prices, triple the copper prices, triple the silver prices, triple the energy prices, and triple the wait for the next-gen product. And triple the bugs!
  • hotaru251
    Explains why drivers are trash now.
  • King_V
    ezst036 said:
    Yes, but it's also why everybody in corporate is freaking out.

    They're counting lines of code, not quality of code. But "triple the productivity" looks good on paper even if it makes the customers angry.
    This is where I see the problem.

    I'm especially happy with this part:
    Beyond that, Cursor has helped in other areas as well, such as debugging, where it excels at finding rare, persistent bugs and deploying agents to resolve them swiftly.

    That's good... But so much mention of this:
    Nvidia now generates three times as much code as it did before AI
    ...
    Nvidia's internal code commits have tripled since it rolled out AI-assisted programming tools to 100% of its engineers.
    Makes me think of those stories where the brain-dead interview question of "how many lines of code have you written?" has come up.

    More is NOT better!

    And then this:
    Cursor closed out its presser by claiming that the "bug rates have stayed flat" despite the improvements in coding volume and overall productivity.

    If bug rates are flat, and you're producing triple the code, then you have triple the bugs. This is not a ringing endorsement by any stretch of the imagination. Could've just hired triple the programmers instead of investing those mountains of money into AI, and gotten the same thing.
  • DS426
    A tripling of the lines of code produced in the same amount of time should be concerning to anyone -- even corporate executives. A single bad line can introduce a security vulnerability or a performance problem. Quantity does not equal quality... Even my 1st grade son knows this, lol.

    I'm sure AMD is doing this as well, but hopefully in a more targeted, controlled, and verifiable fashion. AI can legitimately help developers in a lot of ways as copilots, but just spewing out source code like Niagara Falls is going to cause some real issues.

    Oh wait, that's already happening, judging by how many driver failures there were during the early 50-series launch and over the past couple of months from Windows Update.

    Love the irony that nVidia's own cash cow, AI, will also be a huge stumbling block for them.
  • vanadiel007
    The way it should be used, in my opinion, is that you write a routine and then ask AI to write a shorter, better version of it.
    Then you check what the AI came up with and use whichever version is best.
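    A rough sketch of that compare-and-keep loop (the routines here are toy examples): assert both versions agree first, then keep whichever benchmarks better.

```python
# Compare a hand-written routine against a hypothetical AI rewrite:
# verify they agree on the same inputs, then time both and keep the winner.
import timeit

def mine(xs):  # hand-written original
    out = []
    for x in xs:
        if x % 2 == 0:
            out.append(x * x)
    return out

def ai_rewrite(xs):  # shorter version the AI might propose
    return [x * x for x in xs if x % 2 == 0]

data = list(range(10_000))
assert mine(data) == ai_rewrite(data)  # correctness first, speed second
print("mine:", timeit.timeit(lambda: mine(data), number=200))
print("ai:  ", timeit.timeit(lambda: ai_rewrite(data), number=200))
```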
  • DKATyler
    Fairly concerned about measuring productivity by lines of code. "Triple the lines" is a negative indicator. As a dev, I've often joked that one of these days I'll finally hit a positive line count, because so many of my bug fixes involve deleting bad lines or refactoring to use existing common utility functions.
  • bit_user
    During a recent code review, we reviewers spotted some fishy code and asked the developer about it. It turned out that he'd used AI to generate it and hadn't reviewed it very carefully himself.

    A serious downside of more generated lines of code is that it's more code for human reviewers to go through. I already spend more time reviewing colleagues' code than I'd like. AI can also be used for code reviews, but it's not currently at a level where it can substitute for human reviewers.
  • bit_user
    King_V said:
    If bug rates are flat, and you're producing triple the code, then you have triple the bugs.
    No, I think they meant that the code output has tripled but the bug output has stayed the same. In other words, there are now a third as many bugs per line of code being created. Otherwise, it'd be nothing to brag about.

    However, you might be right that the number of bugs per line is the same. That would be bad, but not horrible, so long as the increased code output truly represents ~3x productivity.
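    A quick back-of-the-envelope makes the two readings concrete (all numbers are made up, since Nvidia published no bug counts):

```python
# Two readings of "bug rates have stayed flat" after a 3x jump in code output.
# Figures are hypothetical; Nvidia disclosed no actual bug counts.
baseline_loc, baseline_bugs = 100_000, 500   # 5 bugs per KLOC before AI

loc_after = 3 * baseline_loc                 # code output triples

# Reading A: the absolute number of bugs stayed flat.
bugs_a = baseline_bugs
print(bugs_a / loc_after * 1000)             # ~1.67 bugs/KLOC, a 3x quality win

# Reading B: bugs per line stayed flat.
bugs_b = (baseline_bugs / baseline_loc) * loc_after
print(bugs_b)                                # 1500 total bugs, 3x as many shipped
```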