Context
Structured prompt frameworks such as LangGPT define behavioural state in advance using modules like Goal, Constraint, Workflow, Style, and Output Format (Zhou et al., 2024).
https://doi.org/10.48550/arXiv.2402.16929
In stable industrial or commercial contexts, this makes sense. The same behavioural constraints can stay active across all responses to guarantee consistent alignment with business expectations.
In general-purpose conversational use, however, query types and audience expectations vary widely within a single interaction. Keeping every behavioural constraint active for every response adds unnecessary overhead, since most interactions need neither structured output nor publication-ready tone. It can also frustrate users when a blanket response style becomes inappropriate for a follow-up, forcing them to retrain the model on the fly for each type of custom response.
These overheads can be reduced with improved rule contextualisation. Better alignment with user expectations via prompt triggers results in fewer follow-up corrections to LLM responses, and general-context queries are answered with only a minimal set of active parameters.
The Proposed Delta
This approach retains a LangGPT-style configuration for behavioural definition, and adds a lightweight context-switching layer to govern behavioural activation at query time.
Switching Example:
MODE
- If "as a document": mode=DOC
- If "publish" | "for an article" | "for readers": mode=PUBLISH
- Else: mode=CHAT
Associated rule sets:
CHAT
- Shortest complete answer
- Minimal formatting

DOC
- Use headings
- Apply logical structure
- Include APA references

PUBLISH
- Use STARL structure by default
- Mostly prose
- Lists only for comparison or steps
- 2–4 sentence paragraphs
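The switching logic above can be sketched as a simple trigger-phrase router. This is a minimal illustrative sketch, not part of LangGPT itself; the function names and the plain substring matching are assumptions about one possible implementation.

```python
# Illustrative sketch of the MODE switch: trigger phrases and rule
# sets mirror the example configuration above. Function and variable
# names are hypothetical, not defined by LangGPT.

RULE_SETS = {
    "CHAT": ["Shortest complete answer", "Minimal formatting"],
    "DOC": ["Use headings", "Apply logical structure", "Include APA references"],
    "PUBLISH": [
        "Use STARL structure by default",
        "Mostly prose",
        "Lists only for comparison or steps",
        "2-4 sentence paragraphs",
    ],
}

# Checked in order: DOC first, then PUBLISH, falling back to CHAT.
TRIGGERS = {
    "DOC": ["as a document"],
    "PUBLISH": ["publish", "for an article", "for readers"],
}

def detect_mode(query: str) -> str:
    """Return the first mode whose trigger phrase appears in the query."""
    q = query.lower()
    for mode, phrases in TRIGGERS.items():
        if any(p in q for p in phrases):
            return mode
    return "CHAT"  # default: no trigger matched

def active_rules(query: str) -> list[str]:
    """Select the rule subset that should govern this response."""
    return RULE_SETS[detect_mode(query)]
```

In practice the matching could be fuzzier (embeddings, intent classification), but even literal substring triggers demonstrate the activation pattern: the behavioural state is fixed in advance, and only the selection varies per query.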
A LangGPT-type configuration defines the available behavioural state within the prompt. These act as query-specific local parameters, layered on top of non-contextual parameters that apply to every query. The proposed switch layer then uses conversational context to select which subset of that predefined behaviour should trigger for a given query.
This is conceptually similar to structured prompt design, but it operates as an external activation mechanism rather than an embedded task configuration.
Personalisation preferences load for every response in a chat session, while user queries invoke different response forms depending on the detected trigger phrases. Behavioural capability is thus defined once and activated conditionally during interaction.
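The layering described above can be sketched as composing a per-query rule set from always-on preferences plus the selected mode's rules. All names and rule text here are illustrative placeholders under that assumption, not values from any real configuration.

```python
# Sketch of layering query-local rules over non-contextual
# (always-on) parameters. GLOBAL_PREFS stands in for the user's
# personalisation preferences; MODE_RULES for the per-mode subsets.

GLOBAL_PREFS = ["Be concise", "Use British English"]  # hypothetical always-on preferences

MODE_RULES = {
    "CHAT": ["Shortest complete answer", "Minimal formatting"],
    "DOC": ["Use headings", "Include APA references"],
}

def compose_prompt(mode: str) -> str:
    """Build the rule block for one response: global prefs first,
    then the rules for the detected mode (CHAT as fallback)."""
    lines = GLOBAL_PREFS + MODE_RULES.get(mode, MODE_RULES["CHAT"])
    return "\n".join(f"- {rule}" for rule in lines)
```

Because the global preferences are concatenated first, every response stays aligned with the user's standing expectations, while only the mode-specific tail changes between queries.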