I really don’t believe the headline. Google has thousands of teams of engineers that are writing code for dozens or hundreds of different products… There’s no way all of them are generating anywhere near 25% of their new code via AI.
Unless they’re doing something like generating massive test fixtures or training data sets using AI and classifying them as “code” 🤔
How often does a solution need “new” code and not “basically the same code as a previous issue but with two small details changed”? This is a genuine question, I have only ever coded as a hobby. But 25% of your work being essentially just copy pasted sounds plausible, and that’s sorta all LLMs are doing, right?
I wonder if “code” means pull requests and they have a load of automated ones to update versions of external and internal libraries
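If that's the case, it wouldn't even take AI — plenty of shops already automate those bump PRs with bots. A minimal Dependabot config (assuming GitHub's format; the npm ecosystem and weekly schedule here are just illustrative) looks like:

```yaml
# .github/dependabot.yml — auto-opens PRs when dependencies have new versions
version: 2
updates:
  - package-ecosystem: "npm"   # which ecosystem to watch (illustrative choice)
    directory: "/"             # location of the manifest (package.json)
    schedule:
      interval: "weekly"       # how often to check for updates
```

Every PR that config produces is machine-generated "code" in some loose sense, which is exactly the kind of accounting that would inflate a headline number.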
The “The company had a strong quarter thanks in large part to AI” part is what makes it sound strange to me — it reads like shareholder ego-stroking.
That said, all they need to do is mandate use of AI during development, like my company has done, and they can boast this kind of bullshit easily.
Wtf does that mean? Like what if you know exactly what you want to do? Do you have to ask GPT to review your code?