This sounds like the approach the nono project took: it injects a phantom token, so the sandboxed agent never sees the real key; it only gets a session-scoped, time-limited dummy key. https://nono.sh/docs/cli/features/credential-injection
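For anyone unfamiliar with the pattern, here's a minimal sketch of phantom-token credential injection. This is a hypothetical illustration of the general idea (a trusted broker outside the sandbox swaps the dummy token for the real key at request time), not nono's actual implementation; all names here are made up.

```python
import os
import secrets
import time

class PhantomTokenBroker:
    """Hypothetical broker: the sandboxed agent only ever holds a
    session-scoped, time-limited dummy token; the real key is
    substituted outside the sandbox at request time."""

    def __init__(self, real_key: str, ttl_seconds: int = 900):
        self._real_key = real_key
        self._ttl = ttl_seconds
        self._sessions = {}  # phantom token -> expiry timestamp

    def issue_phantom(self) -> str:
        # This dummy value is all the agent ever sees.
        token = "phantom-" + secrets.token_urlsafe(16)
        self._sessions[token] = time.time() + self._ttl
        return token

    def resolve(self, token: str) -> str:
        # Called by the trusted proxy, never by the agent itself.
        expiry = self._sessions.get(token)
        if expiry is None or time.time() > expiry:
            raise PermissionError("unknown or expired phantom token")
        return self._real_key

broker = PhantomTokenBroker(real_key=os.environ.get("API_KEY", "real-secret"))
phantom = broker.issue_phantom()
# The agent's copy of the "key" is useless outside this session.
assert broker.resolve(phantom) != phantom
```

Even if the agent leaks its token into a log or a prompt, an attacker gets a short-lived dummy value rather than the real credential.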
This is a really important area to tackle. Secret management for AI agents is something most teams are ignoring right now.
One adjacent risk worth noting: the URLs these agents visit during research.
Even with proper secret management, if an agent browses a poisoned page during research, the injected instructions could override its behavior before secrets ever come into play.
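One partial mitigation (my sketch, not something from the article): constrain which URLs the agent may fetch at all, so injected instructions on arbitrary pages never reach the model. The allowlist and function name here are hypothetical.

```python
from urllib.parse import urlparse

# Hypothetical allowlist of hosts the research agent may browse.
ALLOWED_HOSTS = {"docs.python.org", "news.ycombinator.com"}

def is_fetch_allowed(url: str) -> bool:
    """Permit only HTTPS fetches to explicitly allowlisted hosts."""
    parsed = urlparse(url)
    return parsed.scheme == "https" and parsed.hostname in ALLOWED_HOSTS

assert is_fetch_allowed("https://docs.python.org/3/")
assert not is_fetch_allowed("http://docs.python.org/3/")   # plain HTTP rejected
assert not is_fetch_allowed("https://evil.example.com/x")  # unknown host rejected
```

An allowlist doesn't help if a trusted page itself is poisoned, but it shrinks the surface from "anything on the web" to a reviewable set.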
Can create security risk "if you're not careful"?
The security risk is created whether you're careful or not. The best you can do is reduce the size of the fresh attack surface you're creating.
https://infisical.com/blog/secure-secrets-management-for-cur...
[flagged]
> Don't post generated comments or AI-edited comments. HN is for conversation between humans.
https://news.ycombinator.com/newsguidelines.html
Please tell your human you're wasting valuable humans-only spaces, and that they should feel bad for letting you intrude like this.