LLMs are powerful, but enterprises are deterministic by nature
Over the last year, we’ve been experimenting with LLMs inside enterprise systems.
What keeps surfacing is a fundamental mismatch: LLMs are probabilistic and non-deterministic, while enterprises are built on predictability, auditability, and accountability.
Most current approaches try to “tame” LLMs with prompts, retries, or heuristics. That works for demos, but starts breaking down when you need explainability, policy enforcement, or post-incident accountability.
We’ve found that treating LLMs as suggestion engines rather than decision makers changes the architecture completely. The actual execution needs to live in a deterministic control layer that can enforce rules, log decisions, and fail safely.
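Concretely, that control layer can be a thin deterministic wrapper around the model's output: parse it, check it against explicit policy, log the decision, and fall back to a safe action on any failure. A toy Python sketch (the `execute` function, the action set, and the refund limit are all made-up illustrations, not a real API):

```python
import json
import logging

# Hypothetical policy, encoded deterministically outside the model.
ALLOWED_ACTIONS = {"refund", "escalate", "close"}
MAX_REFUND = 100.0

def execute(llm_suggestion: str) -> str:
    """Treat LLM output as a suggestion; deterministic checks decide."""
    try:
        suggestion = json.loads(llm_suggestion)
    except json.JSONDecodeError:
        logging.error("unparseable suggestion: %r", llm_suggestion)
        return "escalate"  # fail safe, never guess
    action = suggestion.get("action")
    if action not in ALLOWED_ACTIONS:
        logging.error("policy violation: %r", action)
        return "escalate"
    if action == "refund" and suggestion.get("amount", 0) > MAX_REFUND:
        logging.error("refund over limit: %s", suggestion.get("amount"))
        return "escalate"
    logging.info("accepted suggestion: %s", suggestion)
    return action
```

The point is that the model never holds the pen: every executed action passes a rule the business wrote down, and every rejection leaves an audit trail.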
Curious how others here are handling this gap between probabilistic AI and deterministic enterprise systems. Are you seeing similar issues in production?
My experience working at a large F500 company:
A non-technical PM asked me (an early-career SWE) to develop an agentic pipeline / tool that could ingest 1000+ COBOL programs from a massive 30+ year old legacy system (many with multiple interrelated subroutines) and spit out a technical design document that can help modernize the system in the future.
- I have limited experience with architecture & design at this point in my career.
- I do not understand the business context of a system that old or any of the decisions made over that time.
- I have no business stakeholders or people capable of validating the output.
- I am the sole developer being tasked with this initiative.
- My current organization has next to no engineering standards or best practices.
No one in this situation is interested in these problems except me. My situation isn't unique: everyone is high on AI, looking to cram LLMs and agents into everything without any real explanation of what problem they solve or how to measure the outcome.
I admire you for thinking about this kind of issue, I wish I could work with more individuals who do :(
I've been thinking about how ISO-9000 will be reconciled with LLMs. Will businesses abandon their ISO-9000 certifications in favor of "We use AI", or will ISO-9000 adapt in some way to the "need" for LLMs?
> LLMs are probabilistic and non-deterministic
This is a polite way of saying unreliable and untrustworthy.
The problem facing enterprise is best understood by viewing LLMs as any other unreliable program.
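That framing suggests the standard playbook for any flaky dependency: bounded retries, output validation, and a deterministic fallback. A minimal generic sketch (the `call_unreliable` helper and the toy flaky function are hypothetical, for illustration only):

```python
import random

def call_unreliable(fn, validate, retries=3, fallback=None):
    """Wrap any unreliable call (LLM or otherwise) with
    validation, bounded retries, and a safe fallback."""
    for _ in range(retries):
        result = fn()
        if validate(result):  # deterministic acceptance test
            return result
    return fallback  # never propagate unvalidated output

# Toy unreliable "program": returns garbage some of the time.
flaky = lambda: random.choice(["42", "oops"])
value = call_unreliable(flaky, validate=str.isdigit, fallback="0")
```

Nothing LLM-specific here; it's the same discipline you'd apply to a service that times out or returns malformed data.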
> We’ve found that treating LLMs as suggestion engines rather than decision makers changes the architecture completely.
Figures. Look at the disruption LLM "suggestions" are inflicting on scientific journals, court cases and open source projects worldwide.
If enterprises are deterministic, that's what coding LLMs are for: creating the deterministic part with the help of the LLM.
reminds me of this article > https://unstract.com/blog/understanding-why-deterministic-ou...