_wire_ 8 hours ago

I asked Gemini about the inherent hazard of an LLM architecture that permits mixing prompts with data and the response was in so many words, that hazard is the users problem, with references to "best practices" that's more hand-waving puffery about being careful than references to fundamental principles of the separation of code (prompts) from data.

The industry is cultivating LLMs as oracular (prophetic) tools to which users submit requests anticipating results to guide them in the absence of their own understanding. Yet the industry not only permits free mixing of code and data in every request, but believes that this is both necessary and appropriate for tailoring responses.

So there's a dichotomy hazard which is the user is expected to be the master and custodian of the interaction, but invited to let it guide the user from a position of ignorance.

Has there ever been any other programming environment that expects instructions to be written not only in ignorance of the architecture of the machine, but where the programmer expects the machine to guide him to make his instructions meaningful?

The term "prompt injection attack" as part of a vernacular for security seems a woeful under estimation of the hazards we face with this technology.

The allegory of "Little Bobby Tables" can not begin to capture the risk domain.