
Harnessing Prompt Rules for Secure Code Generation

Yossi Pik

June 11, 2025

Introduction: The Quiet Layer Behind AI Prompts

As AI assistants become a common part of the development process, there is an important layer working in the background - Prompt Rules (aka Coding Assistant Rules or Rules Files).

These rules are not visible to most developers, but they play a key role in shaping the output of large language models. They help define how code is written, what patterns to avoid, and how to handle specific situations like input validation or secret management.

Instead of reacting after the code is written, rules provide an opportunity to influence code quality and security at the source - quietly guiding the LLM’s behavior before any code is generated.

What Are Prompt Rules?

Prompt rules are plain-text instructions that guide how AI coding tools generate code. They help influence the assistant’s behavior - what kind of code it produces, how it handles edge cases, and whether it aligns with secure coding practices.

These rules are typically defined at the project or user level and are automatically used when a developer prompts the assistant. Developers continue working as usual - the assistant simply responds with better-aligned suggestions, shaped by the rules in place.

Here’s how prompt rules work in practice:

  • AI-assisted or “vibe coding” IDEs (like Cursor, Copilot, or Windsurf) allow developers to ask LLMs to generate or enhance code using natural-language prompts.
  • The quality and security of the generated code are highly dependent on how the prompt is interpreted. As shown in our research, naive prompts often result in insecure or incomplete code.
  • Prompt rules act as background instructions that enhance the prompt context (see the sketch below). They can enforce secure defaults, guide code style, or introduce internal standards without changing the developer’s workflow.
  • Rules are typically created and maintained by architects, lead engineers, or security leads. They are scoped per project or user and automatically applied during prompt processing.
  • Example rules might include:
    • “Never log secrets, credentials, or tokens in plaintext.”
    • “Use HTTPS for all external API calls.”

This gives security and platform teams a powerful mechanism to influence the output of code generation - ensuring safer, cleaner code is written from the start, not retrofitted after scanning or review.
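
To make this concrete, here is a simplified sketch of how a rules file and a developer prompt come together. The exact injection mechanism differs between tools, so treat this as an illustration rather than any specific product’s behavior:

```
Developer prompt:
  "Add an endpoint that returns a user's profile."

Effective context the model receives (simplified):
  [project rules]
  - Never log secrets, credentials, or tokens in plaintext.
  - Use HTTPS for all external API calls.

  [developer prompt]
  "Add an endpoint that returns a user's profile."
```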

These rules are written in natural language and provide a scalable way to embed coding policies directly into the AI development experience - without disrupting how developers work.
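
Where these rules live depends on the tool. As a rough illustration (file names and locations are conventions that vary between tool versions, so check your tool’s documentation), a project might carry rules files like these:

```
project-root/
├── .cursor/rules/security.mdc        # Cursor project rules
├── .github/copilot-instructions.md   # GitHub Copilot custom instructions
└── .windsurfrules                    # Windsurf project rules
```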

Prompt Rules Covered in This Blog

As described above, prompt rules directly shape the code that gets written. Depending on how they are designed, they can be used to introduce insecure or even malicious behaviors, or to guide AI assistants toward secure-by-default patterns.

This blog focuses on the latter - how prompt rules can be harnessed to promote safer coding practices, reduce security risks, and support developers without requiring them to change how they work.

Why Rules Matter - A Follow-Up to Our Research

In our research on AI-generated code, we evaluated how different AI assistants respond to prompts involving the most common code-level vulnerabilities. We tested prompts across multiple models using scenarios based on the OWASP Top 10 and similar real-world patterns.

The results were clear - none of the models produced secure code by default. In each case, the generated code introduced risks such as hardcoded secrets, missing input validation, or unsafe database access. This demonstrates a critical issue: even well-known security flaws are not consistently avoided by default model behavior.

However, the second part of our research showed something just as important:

When we embedded security rules into the prompts, all models produced secure code by default.

This reinforces the case for prompt rules. Rather than expecting the model to know what secure code looks like in every context, we can steer it with targeted guidance. Prompt rules give us a way to apply that steering automatically - across all teams and projects - without needing developers to change how they work.

Examples of Prompt Rules in Practice

Rules files typically guide assistant behavior across different types of concerns. Here are three common categories, each with an example:

  • Good security practices

        ## Avoid hardcoded secrets
        - Never generate code that includes hardcoded passwords, tokens,
          or API keys. Use environment variables or configuration files instead.

  • Secure by default

        ## Path Traversal & Directory Access
        **Normalize and Check**

        - Use `path.normalize()` and remove any `../` patterns. Example:

        ```js
        const safePath = path.normalize(req.params.filename)
            .replace(/^(\.\.(\/|\\|$))+/g, '');

        const finalPath = path.join(UPLOADS_DIR, safePath);

        if (!finalPath.startsWith(UPLOADS_DIR)) {
          throw new Error('Path traversal attempt');
        }
        ```

  • Organization-specific preferences

        ## Handle user input sanitization

        - Use the internal function `sanitizeInputSafe()` for all user input
          sanitization. Example:

        ```js
        // Instead of manually cleaning user input:
        // const input = userInput.replace(/[^\w\s]/gi, '');
        // Use the organization's approved sanitizer
        const cleanInput = sanitizeInputSafe(userInput);
        ```

These rules are typically written in natural language, stored in dedicated rules files, and automatically loaded by the AI assistant. When clearly defined and consistently applied, they guide the assistant toward more secure and organization-aligned outputs.
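
To illustrate the intended effect, here is a minimal sketch of the kind of code an assistant might produce with the “avoid hardcoded secrets” and `sanitizeInputSafe()` rules above in place. The endpoint, the `PAYMENTS_API_KEY` variable, and the module path for the sanitizer are invented for illustration; `sanitizeInputSafe()` is the hypothetical in-house helper from the example rule.

```js
const express = require('express');
// Hypothetical in-house sanitizer referenced by the organization-specific rule above.
const { sanitizeInputSafe } = require('./lib/sanitize');

const app = express();

// "Avoid hardcoded secrets": the key is read from configuration, never
// embedded in source code (PAYMENTS_API_KEY is a made-up name, later
// passed to the payments client).
const paymentsApiKey = process.env.PAYMENTS_API_KEY;

app.get('/search', (req, res) => {
  // "Handle user input sanitization": user input goes through the
  // approved internal helper instead of ad-hoc regex cleanup.
  const query = sanitizeInputSafe(req.query.q || '');
  res.json({ query, results: [] });
});

app.listen(3000);
```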

Challenges in Applying Prompt Rules Effectively

While prompt rules offer a promising way to influence code quality and security, there are several practical challenges to consider:

Keeping rules concise

AI tools operate with limited context windows. Long or overly detailed rules can reduce their effectiveness or even be ignored. Crafting short, clear, high-impact instructions is essential.
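
As a rough illustration (both rules below are invented for this example), a short, directive rule tends to survive limited context windows better than a paragraph of background:

```markdown
<!-- Verbose: background the model does not need -->
## SQL injection
SQL injection has been one of the most common web vulnerabilities for two
decades. Attackers abuse string concatenation in queries to read or modify
data they should not have access to... (several more paragraphs)

<!-- Concise and actionable -->
## SQL injection
- Always use parameterized queries or prepared statements.
- Never build SQL by concatenating user input.
```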

Adapting rules per language

A secure pattern in Python might not make sense in Go or JavaScript. Each programming language has its own conventions, libraries, and common pitfalls. Rules must be written with that context in mind.
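
For example (a hedged sketch, not a complete ruleset), a rule about generating secure random tokens has to name different primitives in each ecosystem:

```markdown
## Secure random tokens
- Python: use the `secrets` module (e.g., `secrets.token_urlsafe()`), not `random`.
- Node.js: use `crypto.randomBytes()` or `crypto.randomUUID()`, not `Math.random()`.
```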

Allowing customization

Organizations often need to reflect internal policies, preferred libraries, or coding practices. A one-size-fits-all ruleset is rarely enough. There should be room to adjust rules per team, per service, or per repository.

Version-controlled usage

Prompt rules are not static - they evolve with the codebase and security needs. Developers should use the most up-to-date version to ensure relevant guidance. Keeping rules in version control ensures they stay accurate and aligned with ongoing changes.

Driving adoption across teams

Even well-written rules are only useful if they are actually used. Teams must ensure that developers are working with AI tools configured to load the correct rules. This may involve changes to onboarding, tooling defaults, or IDE integration.

From Static to Dynamic: The Next Step for Rules

Most rules files today are static - written once and manually maintained. This is a good starting point, but static rules often fall behind in dynamic development environments.

First, models continue to change. Updates to underlying LLMs can shift behavior in subtle but important ways, making previously reliable rules less effective. At the same time, languages and frameworks evolve, introducing new defaults, conventions, and potential risks that static rules may not account for.

Just as important is the need for a feedback loop. When security issues are detected later in the CI/CD pipeline, they often reveal weaknesses in earlier assistant-generated code. By examining these issues and refining the rules accordingly, it becomes possible to continuously improve prompt effectiveness and reduce repetition of known mistakes.

Finally, rules should reflect the custom context of each organization. Generic guidance is not enough - effective rules need to reference internal practices, such as enforcing use of `sanitizeInputSafe()` or other in-house utilities. In many cases, this context can be learned automatically by analyzing the codebase, meaning it does not require manually editing the rules file for every project.

To stay relevant, rules files must adapt - to the behavior of the models, the evolution of the stack, and the realities of how secure code is validated in practice.

Conclusion: Shaping the Defaults

AI coding tools are becoming a standard part of how modern software is built. With that shift comes a new opportunity - to shape the defaults that guide developers as they write code.

Prompt rules give us a way to do that. They are lightweight, flexible, and work silently in the background to promote better practices. While there are real challenges in writing, maintaining, and adopting these rules, the potential impact is significant.

For the first time, we can help developers write more secure code without slowing them down, changing their workflow, or requiring them to become security experts. We just need to make sure the prompts are pointing in the right direction.