pixlmint 4 minutes ago

I wonder what will happen once these guidelines end up in the LLM training datasets

staticassertion 3 hours ago

This policy is straightforward and shouldn't be particularly controversial (I'm sure it will be bikeshedded to death though). It basically bans the obvious stuff ("don't just drop LLM generated comments onto PRs") and allows the important stuff like LLMs writing code so long as you disclose.

edit: Wow people did not read the policy. It's literally just "if you use an LLM you are responsible for it, we will reject low quality PRs, please disclose that you have used an LLM". This is bog standard.

  • WCSTombs an hour ago

    So...big caveat that this is still under review, so what we're talking about is a moving target, but based on what I can see, it seems considerably more nuanced than that. They basically ban LLM-authored code, with a careful carve-out to run an experiment to try to get only high-quality LLM PRs:

    > It's fine to use LLMs to answer questions, analyze, distill, refine, check, suggest, review. But not to create.

    > We carve out a space for "experimentation" to inform future revisions to this policy.

    Importantly, the LLM contributions must be solicited, i.e., the people responsible for reviewing the final implementation have to opt in explicitly beforehand.

  • dgellow an hour ago

    The discussion thread in the PR is also interesting to go through; a lot of the concerns people raise in the HN discussion are already well discussed there.

nmg 8 hours ago

> ## Other organizations

> These are organized along a spectrum of AI friendliness, where top is least friendly, and bottom is most friendly.

This section is an extremely useful reference

tick_tock_tick 3 hours ago

Some of these are just straight up unhinged.

> Using an LLM to discover bugs, as long as you personally verify the bug, write it up yourself, and disclose that an LLM was used.

What are they going to do, go back and reject a bug if someone later admits they found it with an LLM? Honestly, they and most other projects would probably be better off just ignoring the situation until norms start developing.

  • ZeroGravitas an hour ago

    They're trying to avoid a Boy Who Cried Wolf situation.

    If they get swamped with 100 bug reports that turn out, after they investigate them, to be hallucinations, then it's likely they will ignore a real bug or lose it in the noise.

    An LLM-generated bug report that pretends to be human-written would be trying to abuse that presumption of validity, and is therefore considered a dick move.

  • saagarjha 2 hours ago

    The assumption here is that people act in good faith. If you break the rules, this indicates that you are not acting in good faith, and perhaps should no longer be welcome.

  • staticassertion 3 hours ago

    What are you even talking about? lol, the policy doesn't imply that at all.

    That's in the "allowed with caveats" section. It's just saying not to open bug reports without first reading them yourself, or your bug may be closed. No one is saying "by policy we will have to add the bug back in", jesus christ.

    The policy is insanely straightforward, idk how you can be misinterpreting it this badly. It's just "Disclose that you use a model, you are on the hook for reviewing model output as a human" and then some clear cut examples.

mw888 7 hours ago

Here are the actual policies, not a comment:

https://github.com/jyn514/rust-forge/blob/llm-policy/src/pol...

It's in line with the 'nanny' stereotype of the Rust community that they give you permission to act in ways they would never be able to verify anyway:

> The following are allowed.

> Asking an LLM questions about an existing codebase.

> Asking an LLM to summarize comments on an issue, PR, or RFC...

Like seriously, what's the point of explicitly allowing this? Imagine the opposite were true, you weren't allowed to do this - what would they do? Revert an update because the person later claimed they checked it with an LLM?

The Linux policy on this is far superior and more sensible.

  • MaulingMonkey 6 hours ago

    > Like seriously, what's the point of explicitly allowing this?

    Explicit permission can be useful to preemptively cut off questions from well-meaning people who, acting in good faith, might otherwise pester for clarification (no matter how silly or "obvious" the answer might seem), or get agitated by misconstruing a bans-only list as an overly verbose "no LLMs ever" overreach.

    > It's in-line with the 'nanny' stereotype of the Rust community that they give you permission to act in a way they would never be able to verify anyways: [...]

    Many of us work or have worked in corporate settings where IT takes great pains to help detect and prevent data exfiltration, and have absolutely installed the corporate spyware to detect those kinds of actions when performed on their own closed source codebases. Others rely on the honor system - at least as far as you know - but still ban such actions out of copyright/trade secret concerns. If you're steeped deeply enough in that NDA-preserving culture, a reminder that you've switched contexts might help when common sense proves uncommon.

    While nannying can be obnoxious, I'm not sure that having a document one can point to/link/cite, to allay any raised concerns, counts as nannying.

    • bcjdjsndon 41 minutes ago

      > If you're steeped deeply enough in that NDA-preserving culture, a reminder that you've switched contexts might help when common sense proves uncommon.

      What?

  • vintermann 6 hours ago

    > Like seriously, what's the point of explicitly allowing this?

    I would have LOVED if the university course I took last winter had this. I had to take a very paranoid attitude to what was allowed.

    What they're trying to avoid is a lot of unnecessary conflict with zealous anti-AI people calling for your exclusion for admitting to doing these things. There are people who would ban this too.

    • davesque 4 hours ago

      So then the Rust maintainers are going to give you an F on your report card?

      • bcjdjsndon 40 minutes ago

        Try using Allman braces and see how far you get on a basic issue like that.

      • aabhay 3 hours ago

        No they’ll just drop() you

  • kouteiheika 6 hours ago

    > Like seriously, what's the point of explicitly allowing this? Imagine the opposite were true, you weren't allowed to do this - what would they do?

    Imagine if they just said "LLMs are banned"; then there's a lot of ambiguity. So they specifically outlined that generative uses of LLMs are banned, and that non-generative ones are not banned (i.e. "allowed").

    I think it's a poor choice of words on their part, but it makes sense (considering what their policy is). It's more of a "we're not disallowing use in these particular scenarios, so you can still use LLMs for these if you want". Remember: it's a big project, and if they don't explicitly state something then people will ask and waste everyone's time.

    • saghm 6 hours ago

      If anything, it reads to me as a proactive rebuttal of complaints that they don't allow LLMs; they're definitively stating that they do allow using them for very specific purposes.

  • staticassertion 3 hours ago

    They're just giving examples of what you can do and explicitly saying so. Saying "you couldn't stop me" is completely missing the point.

    This is not very different from the Linux kernel's policy so it's an odd comparison. It's actually almost identical in practical terms.

    edit: lol, proof that this doc needs to be stupidly explicit is right here in the HN comments going out of their way to radically misread it

  • davesque 4 hours ago

    It feels telling that it reads like university course guidelines.

    • dgellow an hour ago

      What do you mean?

DennisL123 7 hours ago

Does the policy fix the issue of many low quality PRs being submitted? Unlikely.

Will it fix a related but different problem? Likely.

  • TazeTSchnitzel 4 hours ago

    The people who submit low quality LLM-generated PRs often don't bother to read the policies first, but at least it will be easier to reject those.

    • saagarjha 2 hours ago

      Ok but what if their OpenClaw reads it for them

classified 6 hours ago

This is highly interesting. It seems clear to me that a lot of thought and work went into this. If I ever were to write a similar document, I'm sure I could learn a lot from this one. Props to the authors and all involved.

afdbcreid 4 hours ago

Note that there are currently several proposed policies (plus hundreds of discussions mostly in private channels), and frankly I'm not sure we'll ever reach a consensus (I'm a Rust project member).

aabhay 2 hours ago

Kudos to the team for this. I think it’s brave of them to stand up for their own experiences and push back against the hype train.

Before you knee-jerk hate on the team for being Luddites, consider:

1. For a language like Rust, there are too few eyes and too many mouths. Reviewing is a job, and an extremely taxing one.

2. The code base needs to be highly hermetic because it's load-bearing across the global economy.

3. Most changes are only relevant if they've followed extensive process, including community feedback.

prashantk_ 4 hours ago

On a general note, I like vouch by mitchellh.

> People must be vouched for before interacting with certain parts of a project (the exact parts are configurable to the project to enforce).

https://github.com/mitchellh/vouch

I think many projects will adopt this instead of allowing everyone / blocking everyone

Many projects have an "AI slop" check in place to directly close a PR and ban the user if it is "AI slop". Otherwise, it will be hard to handle the velocity of PRs.

  • Chris2048 4 hours ago

    Maybe a network of people who can vouch that they've met each other in real life?

    I don't know if keeping your name/face secret is still acceptable? Maybe tiers of devs (anonymous vs. not) for that?

spprashant 9 hours ago

GitHub just won't respond at all.

ares623 8 hours ago

Oh no where is Bun gonna be ported to next?

  • lifthrasiir 8 hours ago

    Nothing. You can always vibe-code in Rust even when the rust-lang/rust repository itself largely forbids vibe coding.

    • staticassertion 3 hours ago

      > even when the rust-lang/rust repository itself largely forbids vibe coding.

      This policy does not seem to forbid vibe coding?

      • lifthrasiir 32 minutes ago

        It does in the narrower sense of vibe coding (as opposed to more general agentic coding, which is also called vibe coding from time to time...).

        > Solicited, non-critical, high-quality, well-tested, and well-reviewed code changes that are originally authored by an LLM are allowed, with disclosure.

        Vibe coding (in its original meaning) would have a hard time arguing it's of high quality.

      • dgellow an hour ago

        I read it as a hypothetical

    • voidhorse 8 hours ago

      But one of the reasons they switched was that Zig, the upstream compiler for the language they originally used, wouldn't accept the slop contributions they wanted to make for Bun perf. What will they do when they need to push a slop contribution upstream to Rust?

      At this point they will probably just fork yet again and maintain some vibe compiler.

      • ares623 6 hours ago

        Huh. I wonder if the original intent was to merge an AI-generated PR into a high-profile project like Zig. It makes the headlines and generates hype. But that went embarrassingly badly for them, so they had "port Bun to Rust" as a backup.

      • whattheheckheck 7 hours ago

        They should make FullstackLang. It compiles English in .md to machine code that can directly run on the specialized hardware it designs for it that you have to 3d print at runtime. Every program gets its own custom hardware. Composability and reuse be damned. Pay the token masters for every thought you have

triyambakam 3 hours ago

Saying "LLM" now sounds dumb. Just say "model". Some are no longer "large" and that is arbitrary.

7e 8 hours ago

[flagged]

  • giancarlostoro 8 hours ago

    The term scope creep comes to mind. Programming languages do not need to grow exponentially 24/7; it's okay to let them grow slowly and stay mature and secure. If Rust were too bleeding edge, the safety promises would corrode over time. I think a better use of some of those PRs is to focus on crates as proofs of concept for things that could benefit Rust, whether included in the standard library or just made available as a crate you can use for programmer-ergonomics reasons.

  • grey-area 6 hours ago

    Please do fork Rust and maintain it for the LLM true believers. I’m sure the real Rust team would be delighted to see fewer low-effort PRs.

    Given what you’ve said above, it would be an easy task ‘accelerating quality and features exponentially’, so you’ll soon be able to show them (perhaps within days!) the error of their ways.

    Please go do it now, we’ll wait.

  • mw888 7 hours ago

    That's an ambitious conclusion, though not as overblown as some may think.

    But I believe that's not the reason Rust adopted this policy; I think they just have a more basic, subjective dislike of AI, irrespective of whatever truth you may have just cited.

  • fgfarben 7 hours ago

    It doesn't really read like a Luddite policy.

    Rust is already well past 1.0. At best an LLM could discover a vulnerability (and the human using it can file a patch) or can help a human improve ergonomics.

  • voxl 8 hours ago

    LLM delusion is insufferable. If all it takes is tokens to make a significantly better programming language in logarithmic time, why hasn't anyone done it?

    • cornholio 7 hours ago

      As someone who's vibe-coding my own self-hosted language (via a TypeScript-to-C++ transpiler and bootstrap), I can tell you mainline commercial models like Opus 4.7 aren't quite there yet. I'm getting 10KB source files ballooning into 80MB outputs for now.

      The main problem is that the problem space is vast and highly interconnected; the LLM needs to reason about the entire language every time it suggests an architectural change, but it can't, so it suggests local changes that make sense to me - a language hobbyist - and then runs into much more difficult problems down the road.

      Maybe Mythos with a lot of (competent) human hand-holding and pre-design can do it.

  • jcgrillo 7 hours ago

    > I expect soon we will see Rust forks with a pro-LLM policy

    I sure hope so. I expect the end result will disprove the following:

    > The Rust team will never be able to catch up to them

    The AI jackasses have been braying in this key for going on a few years now, and there hasn't been one single time any of this breathless noise has resulted in something meaningfully superior. It's time to put up or shut up. Enough bullshit talk. If you can vibeslop a better Rust (or whatever), JFDI and leave everyone behind.

  • ares623 8 hours ago

    Would love to see that happen, personally. All this power being held back by red tape. We need to unleash the beast.

    What do you think is stopping anyone from starting a fork right now? Is it a licensing issue?

    • greenavocado 7 hours ago

      Attention issue. They are desperate.

dryarzeg 7 hours ago

> This policy is intended to live in Forge as a living document, not as a dead RFC.

Oh... I can’t say for certain who wrote it, and I won’t make any definitive claims - personally, I tend to think it was probably mostly written, or at least conceived, by a human - but this sort of phrase… I get a nervous twitch every time I see it, even though it’s actually quite a clever rhetorical device. Hell... Maybe I just need a break; I don’t know, since I’m starting to see LLMs everywhere...

  • saghm 6 hours ago

    I feel like I saw phrasing like this pretty often even before LLMs were a thing