LittleTimothy 4 days ago

Reading the original bug report and the response from the maintainers... yeesh. They are way more tolerant than they should be. At this point I feel the structure and verbosity of the original report are just massive flashing red flags that this is AI slop. The verbosity and detail are immediately and obviously out of line with the complexity of the issue - a better report would've been "Yo dawg, I think there's a buffer overflow in this strcpy". The first thing I would've replied is "Are you sure?", because I think the AI would've immediately done the classic "You're absolutely right that..." without anyone bothering to look at the code.

I think the natural response will just be lower responsiveness from the maintainers to anonymous reports.

  • Trasmatta 4 days ago

    Even the responses were so obviously written by AI:

    > I used to love using curl; it was a tool I deeply respected and recommended to others. However, after engaging with its creator, I felt disrespected, met with a lack of empathy, and faced unprofessional behavior. This experience has unfortunately made me reconsider my support for curl, and I no longer feel enthusiastic about using or advocating for it. Respect and professionalism are key in any community, and this interaction has been disappointing.

    Some of the maintainers tried to keep engaging at that point, but it's so clearly just ChatGPT!

    • Swizec 4 days ago

      Note that this also sounds exactly like the median corporate email. I could see my coworkers writing this. The higher into middle management they are, the more likely.

      • namaria 2 days ago

        Yeah corporate tools all sound alike

    • Bluestein 4 days ago

      > This experience has unfortunately made me reconsider my support for curl

      ... "delving deeper ..." :)

  • LtWorf 4 days ago

    What's the difference between anonymous and an account used by a machine?

    • a_wild_dandan 4 days ago

      friction

      • LtWorf 3 days ago

        They sell stars for nothing. There are loads of automated accounts in use already.

nfriedly 4 days ago

I maintain a handful of Open Source projects written in JavaScript and TypeScript, a couple of which are fairly popular, and I don't think I've seen any of this. Maybe it just hasn't reached the JavaScript world yet?

One project is a rate limiter, and for a while I was getting a fair number of bug reports that boiled down to configuration mistakes, such as accidentally rate limiting the load balancer/reverse proxy rather than a specific end user. I implemented a handful of runtime checks looking for common mistakes like that, each logging a one-line warning with a link to a wiki page that gave more details. Since then, the support burden has come down dramatically.
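
The kind of check described above can be surprisingly small. A minimal sketch of the idea in TypeScript - the names, the threshold, and the warning text are all illustrative, not code from any real project:

```typescript
// Sketch of a runtime misconfiguration check for a rate limiter.
// If every request appears to come from one address, the limiter is
// probably keying on the load balancer/reverse proxy, not end users.
const requestsPerIp = new Map<string, number>();
let warned = false;

function recordRequest(clientIp: string): void {
  requestsPerIp.set(clientIp, (requestsPerIp.get(clientIp) ?? 0) + 1);

  let total = 0;
  for (const count of requestsPerIp.values()) total += count;

  // After a reasonable sample, a single distinct IP is a strong signal
  // of misconfiguration: warn once, with a pointer to the longer docs.
  if (!warned && total >= 100 && requestsPerIp.size === 1) {
    warned = true;
    console.warn(
      "rate-limiter: all requests share one IP; you may be rate limiting " +
        "your load balancer/reverse proxy instead of end users. See the " +
        "project wiki for notes on trusting X-Forwarded-For."
    );
  }
}
```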

  • ajross 4 days ago

    Zephyr isn't getting any either, that I've seen. The projects in evidence in the article are Python and curl, so it's likely limited to only the highest-profile targets.

    What would be interesting is who's doing it and why. The incentives wouldn't seem to be malicious; there's no attempt, à la xz-utils, to boost credentials for a real human. Honestly, if I had to guess, it's an AI research group at Microsoft or wherever trying to tune their coding bots.

    • techjamie 4 days ago

      Curl offers a monetary reward as part of their bug bounty program, so that is a contributing factor in their case.

      It seems to me like talentless hacks hoping ChatGPT will get them easy money/cred without any actual work.

      • nfriedly 4 days ago

        Yeah, that's probably part of it - none of my projects have any bug bounty.

  • BeetleB 4 days ago

    [flagged]

    • dang 3 days ago

      Please don't do this here.

gauge_field 4 days ago

I've encountered several spamming bots. In all instances, I reported the issue to GitHub support. They were very quick (beyond my expectation) to respond. I suggest others do the same; it has been really easy and effective as far as my experience goes.

JoeAltmaier 4 days ago

Who thinks they are contributing something when they do crap like that? Setting up an AI to bombard a group working for free on something they value. Gotta be a kid with no sense.

  • hyhconito 4 days ago

    I know a guy who does this. He finds a problem, then tells ChatGPT about it. ChatGPT elaborates it into dross. He says "look at the magical output", without reading it or bothering to understand it, then posts it to the vendor. The vendor gets misled and the original issue is never fixed. Then I have to start the process again from scratch after two weeks have been wasted on it.

    The root cause is that LLMs are a damage multiplier for fuckwits. It is literally an attractive hammer for the worst of humanity: laziness and incompetence.

    I imagine that could be weaponised quite easily.

    • Bluestein 4 days ago

      > The root cause is that LLMs are a damage multiplier for fuckwits.

      Reminds me of Eco's quote about giving the "village idiot" a megaphone. But, transposed to the age of AI.-

      • hyhconito 4 days ago

        It's much worse than that. It's giving the village idiot something that turns their insane ramblings into output that is incredibly verbose and sounds credible, but inherits both the original idiot's poor communication and the subtle ramblings of an electronic crackhead.

        Bring back the days of "because it's got electrolytes" because I can easily ignore those ones.

        • fakedang 4 days ago

          To quote another frontpage article, it transforms the village idiot into a "Julius".

          • hyhconito 4 days ago

            Oh shit I just read that and am utterly horrified because I've been through that and am going through it. I have instantly decided to optimise myself to retire as quickly as possible.

            • fakedang 4 days ago

              Don't worry, you're not alone. I'm in the same boat. :)

      • jgalt212 2 days ago

        The culture is being driven by village idiots.

        • Bluestein 2 days ago

          Since ...

          ... at least when villages were prevalent. Or, earlier ... :)

    • nullc 4 days ago

      > I imagine that could be weaponised quite easily.

      I've been dealing with a vexatious con artist who has been using ChatGPT to dump thousands of pages of garbage on the courts and my legal team.

      The plus side is that the output is exceptionally incompetent.

      • vouaobrasil 4 days ago

        > The plus side is that the output is exceptionally incompetent.

        It won't be for long. This is reminiscent of the development of the first rifles, which often jammed or misfired and weren't very accurate at long range. Now look at weapons like the Barrett .50 cal sniper rifle - that's what AI will look like in 10 years.

        • rsynnott 3 days ago

          Ah, yes, AI jam tomorrow.

          (Though, perhaps an unusually pessimistic example of the “real soon now, it’ll be usable, we promise” phenomenon; rifles took about 250 years to go from ‘curiosity’ to ‘somewhat useful’).

        • hyhconito 4 days ago

          I keep hearing this, but the current evidence, the asymptotic progress, and the financials all say otherwise.

          • RevEng 2 days ago

            Technological advances tend to happen in jumps. We are no doubt approaching a local optimum right now, but it won't be long until another major advancement propels things forward again. We've seen the same pattern in ML for decades.

            • probablybetter 2 days ago

              Please name one technological advance of major import in the fundamental transformer kernel space that has occurred in the last decade and has any bearing at all on today's LLMs.

              I will wait.

              • RevEng 9 hours ago

                The very idea of the Transformer architecture. Surely you've heard of "Attention is all you need".

          • vouaobrasil 4 days ago

            I guess what you are saying would probably have been said by AI skeptics in the 70s, but LLMs provided a quantum leap. Yes, progress is often asymptotic and governed by diminishing returns, but discontinuous breakthroughs must also be factored in.

            • probablybetter 2 days ago

              Please tell me what quantum leap was provided by LLMs. Please inform me of any developments that made current LLMs possible.

              I contend that there are none. Witness the actual transformer kernel technologies over the last 20 years and try to find a single new one.

              Neural networks? That's '90s technology. Scale is the only new factor I can think of.

              This is an investor-dollar-driven attempt to use brute force to deliver "magical" results while the fundamental technology is misrepresented to the general public, to CTOs, and even to developers.

              This is dishonest and will not end well.

              • imtringued a day ago

                The biggest capability jump comes from semantic search. You can now search based on the meaning of a text rather than a literal character-level match.
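
                A toy illustration of the difference - the two-concept "embedding" below is a hand-rolled stand-in invented purely so the example runs; a real system would get its vectors from a trained model:

                ```typescript
                // Toy stand-in for an embedding model: a couple of hand-coded concepts.
                const concepts: Record<string, number> = {
                  cancel: 0, close: 0, delete: 0, // concept 0: removal
                  account: 1, profile: 1,         // concept 1: user account
                };

                // Count concept hits per word. A real model replaces this entirely.
                function embed(text: string): number[] {
                  const v = [0, 0];
                  for (const w of text.toLowerCase().split(/\W+/)) {
                    if (w in concepts) v[concepts[w]]++;
                  }
                  return v;
                }

                function cosine(a: number[], b: number[]): number {
                  const dot = a.reduce((s, x, i) => s + x * b[i], 0);
                  return dot / (Math.hypot(...a) * Math.hypot(...b) || 1);
                }

                const query = "how do I cancel my account";
                const doc = "steps to close your profile";

                console.log(doc.includes(query));              // false: no character-level match
                console.log(cosine(embed(query), embed(doc))); // 1: same meaning
                ```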

                • Bluestein 15 hours ago

                  I am slowly coming 'round to the same conclusion: Word2Vec might be as fundamental as fire - all due caveats aside, of course ...

        • probablybetter 2 days ago

          It is unlikely that the output of LLMs will improve much. There is no fundamental breakthrough in transformer technology (or anything else) powering today's LLM "revolution".

          There is only scale, employed like never before: vast datasets plowed through, enough to sustain the current illusion for the less observant humans out there...

          Ten years from now, this current fad of LLMs pretending to be intelligent will look preposterous and unbelievable: "how COULD they all have fallen for such hype, and at what cost in joules per computation... the least deterministic means possible of arriving at any result... just wasteful, for no purpose..."

        • m463 a day ago

          On the other hand, firearms were invented in the 10th century, and I don't think we got reliable cartridges until the mid-1800s.

          All that time the bow and arrow were long-range, accurate, quiet and worked in the rain. :)

          [But yeah, you're right - in our lifetimes technologies have changed immensely.]

      • delfinom 2 days ago

        And the judge hasn't sanctioned said con artist yet?

        • nullc 14 hours ago

          He was just handed a prison sentence, but it's suspended for two years on the condition that he doesn't bring any more of these cases. He's left the jurisdiction and has been hopping around extradition havens, so an arrest order would just have the effect of ending the court's ability to influence him.

          He's already announced that he's going to attempt another appeal. I expect that, similar to his recently rejected appeal, he'll file another few thousand pages of ChatGPT hallucinations in this one.

      • Bluestein 4 days ago

        Goodness gracious. Are we getting to DDoJ? (Denial of Justice by AI?) ...

        ... getting to an "AI arms race" where the team with the better AI "wins" - if nothing else by virtue of merely being able to survive the slop and get to the actual material - and then, of course, argue.-

        • RevEng 2 days ago

          Flooding the court system with frivolous charges and mountains of irrelevant evidence is already a common tactic for stalling justice. Sometimes it's just to make the other side run out of money and give up. Sometimes it's an attempt to overwhelm the jury with BS so they can't be confident in their conclusion. And more recently, we've seen it used to delay a decision until they are elected and therefore immune.

    • RajT88 3 days ago

      My wife has a coworker like this.

      Except instead of bug reports, he just gets some crap code written and sends it to her, assuming it can be dropped in and run. (It is often wrong.)

    • rsynnott 3 days ago

      But _why_? What’s his motivation for doing this, vs just writing a proper report?

  • ozim 4 days ago

    Or it might be a state threat actor trying to tire people out and then plant some stuff somewhere in between. Like Jia Tan, but 100x more often, or then getting their people to "help" with the cleanup.

    You just underestimate evil people. We are long past the "bored kid in a hoodie in his parents' basement".

    Any piece of OSS code that might end up used by a valuable (or even not-so-valuable) target is of interest to them.

    • Bluestein 4 days ago

      I have got to say that - on a first, naive, approach - the whole situation hit me in a very "supply chain attack" way too.-

  • pimlottc 4 days ago

    People do things like this because it makes their gamified GitHub metrics go up.

    • marcus0x62 4 days ago

      I call those drive-by PRs. If you work on an even moderately popular project, you’ll end up with people showing up - who have never contributed before - and submitting patches for stuff like typos in your comments that they found with some auto scanning tool.

      As far as I can tell, there are people whose entire Github activity is submitting PRs like that. It would be one thing if they were diving into a big codebase, trying to learn it, and wanted to submit a patch for some small issue they found to get acquainted with the maintainers, or just contribute a little as they learned, but they drop one patch like that, then move on to the next project.

      I don’t understand the appeal, but to each their own, I guess.

      • exsomet 3 days ago

        Genuine curiosity - admittedly these sorts of “contributors” probably aren’t doing it out of a passion for FOSS or any particular project, but if it’s something that fixes an issue (however small), is that actually a net negative for the project?

        Maybe I have some sort of bias, but it feels like a lot of the projects I see specifically request help with these sorts of low-hanging-fruit contributions (lumping typos in with documentation).

        • marcus0x62 3 days ago

          I don't think these PRs are a net negative for a project - I've never turned one down, anyway. I just don't understand what the contributor gets out of the arrangement, other than Github badges. Some theories I've held:

          1) They want/intend to contribute more and are starting small, but either get overwhelmed, lose interest, don't have enough time, etc.

          2) They are students padding their resumes (making small PRs so they can say they contributed to x-number of open source projects.)

          3) It's just the GitHub badges.

          • Suppafly 2 days ago

            >1) They want/intend to contribute more and are starting small, but either get overwhelmed, lose interest, don't have enough time, etc

            Yeah that's essentially how people are encouraged to start contributing to open source projects, making small changes and cleaning up documentation and such. It's hard to allow for this while also preventing the other two categories.

      • Suppafly 2 days ago

        >submitting patches for stuff like typos in your comments that they found with some auto scanning tool

        Wonder how long it'll be until we see back-and-forth wars of people submitting 'corrections' for things that are just spelled differently between dialects, like we see on Wikipedia.

        • Bluestein 2 days ago

          ... and then maintainers having to come up with "style manuals" to codify a project's preference and avoid this ping pong ...

    • vouaobrasil 4 days ago

      This is exactly why it's a bad thing in general to have a single metric or small group of metrics that can be optimized. It always leads to bad actors using technical tools to game them. But we keep making the same mistake.

    • LtWorf 4 days ago

      You could just do it on fake projects created for metrics as well, so nothing real is harmed :D

  • kichik 4 days ago

    They might be looking for some open-source fame. The contribution to their resume is more important than the contribution to the project.

    • 0points 4 days ago

      I fixed a single-word typo in a docstring in github.com/golang/go, which resulted in a CONTRIBUTORS entry and an endless torrent of spam from "headhunters".

    • ramon156 4 days ago

      This was an issue without LLMs too, and it sucks. GH has a tag for "good first issue" which always gets snatched by someone who only cares about the contribution line. Sometimes they just let it sit for weeks because they forgot that they now have to actually do the work.

  • janice1999 4 days ago

    It's people, usually students, trying to pad out their GitHub activity and CVs.

    • Bluestein 4 days ago

      It's an insidious incentive in an age where an AI is going to look through your CV and not care much - or be able to tell the difference ...

      • esperent 4 days ago

        If the AI were told to care, identifying low-grade or insignificant contributions is well within its capabilities.

        • vouaobrasil 4 days ago

          Not in 10 years when the contributions become more sophisticated. Like many other scenarios of this time, it's an arms race.

          • esperent 4 days ago

            If the contributions become sophisticated enough to be actually good, then problem solved, right?

            • vouaobrasil 3 days ago

              No, because the arms race will consume huge amounts of energy and resources, so no, not problem solved.

  • codedokode 4 days ago

    On HackerOne you can get money for a bug report; could that be the reason? I think the first sentence of the report was probably written by a human and the rest by AI. The report is unnecessarily wordy, has typical AI formatting, and its several paragraphs of detailed "you are absolutely right" explanation are telltale signs of an LLM.

    • billy99k 2 days ago

      I've been active on HackerOne for a decade. Even a good report written by a human has trouble making it through. These AI-written reports have no chance.

  • RicoElectrico 21 hours ago

    It's a cultural thing. In the absence of opportunities for good education and jobs, cargo-culting kicks in.

  • Bluestein 4 days ago

    I am trying - honestly - to wrap my head around it ...

    Who knows. Might be some sort of "distributed attack" against Open Source by some nefarious actor?

    I am still thinking about the "XZ Utils" fiasco. (Not implying they are related, anyhow).-

    • llamaimperative 4 days ago

      Have you been on the internet? There are plenty of just plain moronic people out there who, with the help of LLMs, can produce enough bullshit to clog approximately any quasi-public information channel.

      Such results used to require sophistication and funding, but that is no longer true.

      • anon373839 4 days ago

        I’d like to inject a personal gripe here, namely: the people who take the time to answer questions on Amazon with “I don’t know.” Why.

        • stonogo 2 days ago

          Because Amazon sent them an email. When questions go unanswered, Amazon emails some of the prior purchasers, asking them the question. These emails are constructed to look like the recipient is personally being asked for help.

          So, why? Courtesy, believe it or not. Blame Amazon.

          • anon373839 20 minutes ago

            Oh, that makes total sense.

      • Bluestein 4 days ago

        Awful. See comment about "morons" upthread ...

    • eddsolves 4 days ago

      Na, it’s not intentionally malicious - people are just trying to pad their resumes for new roles while unemployed or as students. I did the same (not with AI, but picking up the low hanging easy tasks to add a few lines to my CV years ago).

      • aziaziazi 4 days ago

        Wonder if you had to give more detail on those experiences during the interviews? How did it go?

      • Bluestein 4 days ago

        Thanks. Yes, this, as a modus operandi, is becoming apparent from the threads.-

    • 0points 4 days ago

      No need to go there.

      There is a simpler explanation, and it is being discussed in the comments.

      Kids trying to farm GitHub fame using LLMs.

Kelvin506 3 days ago

Given that LLMs aren't able to properly understand code, would it be feasible and useful to create AI honeypots?

For example, add some dead code that contains an obvious bug (like a buffer overflow). The scanbots catch it, submit the PR, get banned.
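
One possible shape for that - sketched here in TypeScript rather than C, so the bait is an obvious injection bug instead of a strcpy overflow; every name below is invented for illustration:

```typescript
// honeypot.ts -- deliberately dead code, never imported or called anywhere.
// A scanner that "finds" this bug proves it never checked whether the code
// is reachable, which is exactly the signal we want.

/** BAIT -- do not wire this up. A classic injection pattern scanners flag on sight. */
export function buildUserQueryUnsafe(name: string): string {
  return "SELECT * FROM users WHERE name = '" + name + "'";
}
```

Any report or PR that cites buildUserQueryUnsafe can then be flagged automatically: since the function is unreachable, the reporter demonstrably never verified their claim.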

iFire 2 days ago

The vast majority of Godot engine issue reports aren't junk bug reports. The ones that are AI are so rare that they're notable to me, like a rare shiny. There are 100,815+ issues/pull requests.

I'm more curious to see what they look like.

prepend 4 days ago

It seems like the answer to this is just reputation limits. There's no good "programmer whuffie" system, but I imagine requiring an active GitHub account over a year old would reduce instances.

And then post the email address and handle of spam submitters so they are found when potential employers google them.

I always google applicants as part of the interview process. If I found they had submitted this kind of trash, it would really harm their chances of being hired.

  • calvinmorrison 4 days ago

    If someone doesn't have a GitHub, I'll be more impressed.

    • aziaziazi 4 days ago

      Would you share why, if you don't mind?

      • LtWorf 4 days ago

        He's from Silicon Valley; parents there create GitHub accounts for their children before they're even born. /s

ksajadi 4 days ago

We also get a lot of those for our service. I'm not sure if they are AI-generated, but many are low quality. The problem is that, as part of our processes, we are required to respond to and triage every report and keep an audit trail for them.

As a result, I started a project to use various fine-tuned LLMs to do this part for us. I guess this is a case of needing a bigger gun to deal with the guys with guns!

  • Bluestein 4 days ago

    "The only thing that stops a bad guy with an AI is ... yadda yadda :)

Havoc 4 days ago

Feels like a prelude to similar issues that will crop up in other areas of society. I'd say most processes are not resilient against this.

  • aprilthird2021 4 days ago

    The worst thing is that AI will make it even harder for human beings to talk to other human beings for support or to fix problems, because bad-faith actors AI-DDoSing every channel will cause businesses to take precautions to avoid spending actual money on responding to AI garbage.

hrthagf 4 days ago

Ironically, CPython itself is already inundated by junk bug reports from core developers themselves, some of whom cash in on "fixing" the fake issues.

Or sometimes bugs are introduced by the endless churn but then attributed to someone who wrote the original bug-free code, which leads to more money and (false) credit for the churn experts.

  • kosayoda 4 days ago

    Do you have a source for this claim? I'm curious

bhouston 4 days ago

Can we also automate the response to these via AI?

Can we have AI bug responses? So by default, GitHub assesses each bug report using AI and gives us a suggested response or analysis. If it's a simple fix, just propose a PR. If it's junk or not understandable, say so. And if it's a serious issue, confirm that and raise awareness.
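
As a sketch of what that could look like - not an existing GitHub feature; the webhook fields and the issue-comments endpoint are GitHub's real API, while reviewIssue() is a hypothetical stand-in for whatever model you'd call:

```typescript
// Auto-triage webhook sketch. Hypothetical overall; reviewIssue() is invented.
import http from "node:http";

// Stand-in for an LLM call: classify the report and draft a reply.
async function reviewIssue(
  title: string,
  body: string
): Promise<{ verdict: "junk" | "trivial" | "serious"; reply: string }> {
  // ...call your model of choice here; hardcoded for the sketch...
  return { verdict: "junk", reply: "Please include a reproducible test case." };
}

http.createServer(async (req, res) => {
  if (req.headers["x-github-event"] !== "issues") return void res.end();
  let raw = "";
  for await (const chunk of req) raw += chunk;
  const event = JSON.parse(raw);
  if (event.action !== "opened") return void res.end();

  const { verdict, reply } = await reviewIssue(event.issue.title, event.issue.body ?? "");
  // Post the suggested response as a comment (real GitHub REST endpoint),
  // leaving the serious reports for a human maintainer.
  if (verdict !== "serious") {
    await fetch(
      `https://api.github.com/repos/${event.repository.full_name}/issues/${event.issue.number}/comments`,
      {
        method: "POST",
        headers: {
          Authorization: `Bearer ${process.env.GITHUB_TOKEN}`,
          "Content-Type": "application/json",
        },
        body: JSON.stringify({ body: `[auto-triage: ${verdict}] ${reply}` }),
      }
    );
  }
  res.end("ok");
}).listen(8080);
```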

Personally, I want to move towards self-maintaining GitHub repositories. I should only be involved for the high-level tasks and for gatekeeping new PRs, not the minor issues that are trivial to address.

We need to not simply fight AI, but rather use it to level everyone up.

  • ThrowawayR2 4 days ago

    People buying LLM services to combat a problem other people created by buying LLM services, incentivizing the latter to try harder by buying more LLM services? A perfect vicious circle. The LLM providers will surely be laughing all the way to the bank.

  • vouaobrasil 4 days ago

    > We need to not simply fight AI, but rather use it to up level everyone.

    This is an arms race. And unlike traditional arms, because it involves intellectual capabilities of a machine, there may be no limit to the race. It does not sound like a good world in which everyone is fighting everyone else with advancing AIs that use increasingly more energy to train.

    It's the mechanization of the broken window fallacy.

  • LtWorf 4 days ago

    > Can we also automate the response to these via AI?

    How to make github issues entirely useless :D

Uptrenda 4 days ago

Yeah, if you think that's bad: I once had someone submit a pull request linting my entire project's code. It was a single 20k+ line change. To merge the output of that one command, I would have had to read every changed line to make sure it wasn't malicious. I decided in the end it wasn't worth the effort and rejected it.

  • LtWorf 4 days ago

    Eh, some idiot did a similar thing: one commit that changed basically the whole project.

    I told him to split it into several commits, and they were just… the same shit, arbitrarily divided into several commits - no logical separation, no way to reject one commit and accept another.

    I said I wasn't going to accept that crap and he got offended.

anonnon 4 days ago

This is not a new thing. A decade ago, when I was more active in OSS, I remember occasionally seeing bizarre posts on our mailing lists that had a distinctly Markovian feel and included hallucinated snippets of code. In some cases they used completely unnecessary and out-of-character (for our lists) profanity. These posts were often plausible enough that they usually netted a legitimate reply or two before someone pointed out the OP was a bot.

The goal, one way or another, seemed to be spam: either harvesting email addresses, or gaining access to some venue (I guess an issue tracker?) where spam could be posted.

  • 0points 4 days ago

    I have seen a few cases in the last couple of years of FOSS bug reports where the author used PVS-Studio or similar static code analysis tools and made a big deal about perceived issues, without really understanding what's going on.

    These are not LLM output at all, but it's the same general issue: it takes 10 seconds to generate a report but days or weeks for the FOSS maintainers to comb through all the noise.

    Most recently, this one https://github.com/hrydgard/ppsspp/issues/19515

  • Bluestein 4 days ago

    > Markovian feel that included hallucinated snippets of code

    ... "prior art" for hallucinated (confabulated) code ...

    PS. Sometimes methinks any "moderation"/interaction issue we might encounter nowadays, was faced/dealt with on IRC, before.-

    • avian 4 days ago

      LLMs/generative AI are a significant change from what we had to deal with before, both in terms of volume and accessibility to the common fuckwit (to borrow the term from another thread), and in terms of moderator time per interaction (because it's now significantly harder to recognize this sort of content).

      • Bluestein 4 days ago

        Certainly. I do agree.-

nullc 4 days ago

Return to cathedral.

desktopninja 2 days ago

AI is a cancer ... What would Steve Ballmer say? Mmm

aziaziazi 4 days ago

Let's start requiring CAPTCHA+3FA for bug reports, and then for every single text field on the web.

rurban 4 days ago

I've had only positive experiences so far: a few well-written issues and even PRs, driven by fuzzers. Not bad.

I've had much worse reports from humans, and even CVEs, which were invalid and absolute trash.

And the recent trend to do sports reports generated by ChatGPT is insulting.

7bit 2 days ago

The title is clickbait and does not reflect the article in any way.

probablybetter 2 days ago

Consider the bubble already burst, in terms of developer confidence in this sort of nonsense.

omolobo 2 days ago

Meanwhile, every YC startup these days is some unimaginative variant of "let's automate this with LLMs".

Humanity racing towards maximum idiocy.