3 Comments
Poojitha Marreddy

Thank you for sharing this insight on uncontrollable AI and the guardrails to build into a system before things go bad.

In the same spirit, drawing on real-world systems and lessons I learned the hard way, I wrote a breakdown of AI safety and security in my latest article. I hope it gives the community some additional insight as well.

https://open.substack.com/pub/poojithamarreddy/p/building-secure-agentic-ai-a-product?r=3qhz95&utm_medium=ios&shareImageVariant=overlay

Neural Foundry

This is exactly what I've been saying! I worked on a support bot last year that kept escalating issues to managers without proper thresholds, and we ended up with VP-level staff handling password resets. The distinction between alignment and control really clicks for me: alignment is what we hope happens; control is what actually keeps things from going sideways. Most teams I see are still stuck at layer 1, thinking prompts will save them, when what they really need are structural guardrails, something like the rough sketch below.
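
To make "structural" concrete, here's a minimal sketch of the kind of hard gate we should have had from day one. All names are hypothetical, not our actual code; the point is that the gate lives in code, outside the prompt:

```python
# Rough sketch of a structural escalation guardrail, enforced in code
# rather than in the prompt. All names here are hypothetical.
from dataclasses import dataclass

@dataclass
class Ticket:
    issue_type: str       # e.g. "password_reset", "billing_dispute"
    failed_attempts: int  # how many times the bot has tried to resolve it

# Issues the bot must always handle itself, no matter what the model asks.
SELF_SERVE = {"password_reset", "account_unlock"}

def may_escalate(ticket: Ticket, max_attempts: int = 3) -> bool:
    """Hard gate between the model's 'escalate' intent and the action.

    The model can request an escalation; this function decides.
    """
    if ticket.issue_type in SELF_SERVE:
        return False  # VPs never see password resets
    if ticket.failed_attempts < max_attempts:
        return False  # the bot hasn't genuinely tried yet
    return True

# Every escalation the model proposes goes through the gate first.
ticket = Ticket(issue_type="password_reset", failed_attempts=5)
assert may_escalate(ticket) is False
```

The model can ask to escalate all it wants; a dozen lines of ordinary code decide whether that actually happens.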

Michael J. Goldrich

The point that the real danger is uncontrolled capability, not intelligence, reframes the whole AI safety conversation.

Control systems aren't optional. They're what separate deployment from chaos.