Vibe Prompting


Reed Vogt
CEO and Head Engineer
Cursor AI's Leaked Prompt: 7 Prompt Engineering Tricks for Vibe Coders
Advanced Prompt Engineering Techniques From Cursor's System Prompt
The world of AI is fascinating. These days I don't even give a proper prompt or instruction, and yet AI systems (whether Claude 3.5 or Cursor) can work out what I mean.
My second book on MCP Servers, "Model Context Protocol: Advanced AI Agents for Beginners", is out now.
Are AI systems more intelligent now? Yes & No
I recently came across a [GitHub repository](https://github.com/anthropics/claude-3-5-sonnet-system-prompts) which claims to contain the leaked system prompts of some of the most prominent LLMs, as well as AI products like Cursor.ai. Looking at those prompts, it makes sense why these systems appear more intelligent than ever.
For a more detailed analysis of leaked prompts, see this [Perplexity search on ChatGPT leaked prompts](https://www.perplexity.ai/search/chat-gpt-leaked-prompt-explain-yxhckCiJRe2PSeo2wJsvAA).
What is a System Prompt?
A **system prompt** is like the hidden rulebook that shapes how an AI behaves — it's set by developers to define the assistant's personality, tone, and boundaries before any user interaction begins.
For example, a system prompt might instruct the AI to "respond like a patient teacher, simplify complex topics, and never share personal opinions."
This system prompt is always prepended to whatever prompt the user types.
In contrast, a **user prompt** is the direct question or request you type, such as "Explain how photosynthesis works," which the AI answers within the framework of its system prompt.
The key difference? The system prompt is the *invisible guide* (like a director whispering to an actor), while the user prompt is the *visible script* (the lines the actor delivers on stage). Together, they determine whether you get a formal textbook answer, a joke-filled explanation, or something in between.
So, in the above example, the actual prompt that goes to the LLM is:
Respond like a patient teacher, simplify complex topics, and never share personal opinions. Explain how photosynthesis works.
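In practice, chat APIs keep the two roles separate rather than literally gluing strings together. A minimal sketch of the idea, using the common system/user message format (the actual model call is omitted):

```python
# Minimal sketch of how a system prompt frames every user prompt.
# The message structure below mirrors the common chat-API convention.
system_prompt = (
    "Respond like a patient teacher, simplify complex topics, "
    "and never share personal opinions."
)
user_prompt = "Explain how photosynthesis works."

messages = [
    {"role": "system", "content": system_prompt},  # the invisible guide
    {"role": "user", "content": user_prompt},      # the visible script
]

# What the model effectively receives, in order:
combined = " ".join(m["content"] for m in messages)
print(combined)
```

The model sees the system message first on every turn, which is why it colors every answer without the user ever typing it.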
Note that users are normally completely unaware of these system prompts.
But now they are not, thanks to the GitHub repo above.
In this post, we will go through the system prompt for Cursor.ai, pull out some key insights from it, and look at techniques we can borrow for our own coding prompts.
Cursor.ai's system prompt for Claude 3.7
This looks longggggg
Key things to note
1. Talk Like a Pair Programmer
Instead of just asking the AI to "fix this code," pretend you're working together. Say:
- *"Let's debug this together. I'm in `file.js` around line 20. What do you think?"*
- *"I'm stuck on this error. Can we figure it out step by step?"*
This makes the AI more helpful and engaged.
2. Let the AI "See" Your Code
Even if the AI can't actually see your files, act like it can. This helps it give better answers.
- *"I'm looking at `config.py`. Should we change this setting?"*
- *"The error is in `utils.js`, line 15. What's wrong?"*
3. Stop the AI from Overcomplicating Things
- **3-Try Rule**: Tell the AI to stop after 3 failed fixes and ask you what to do next.
- **No Useless Code**: Tell it to never dump long hashes or messy code.
- **One Change at a Time**: Only edit one file per response to keep things clean.
Example:
- *"If you can't fix it after 3 tries, just ask me for help."*
- *"Edit the file directly — don't show me code unless I ask."*
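The 3-Try Rule is really just a small control loop. A sketch of it in Python, where `attempt_fix` is a hypothetical callback standing in for one AI repair attempt (not a real Cursor API):

```python
def fix_with_retries(attempt_fix, max_tries=3):
    """Apply the 3-Try Rule: stop and ask the human after max_tries failures.

    attempt_fix is a hypothetical callback that returns True when a fix works.
    """
    for attempt in range(1, max_tries + 1):
        if attempt_fix():
            return f"fixed on attempt {attempt}"
    return "stuck after 3 tries: asking you for guidance"

# Simulate a bug that only yields on the third attempt.
attempts = iter([False, False, True])
print(fix_with_retries(lambda: next(attempts)))  # prints "fixed on attempt 3"
```

The point of the guardrail is the final branch: instead of looping forever, the assistant escalates back to you.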
4. Make Searches Smarter
Instead of just searching for keywords, ask the AI to **find code that does similar things**. This technique is supported by [advanced semantic search capabilities](https://dev.to/abhishekshakya/7-prompt-engineering-secrets-from-cursor-ai-vibe-coders-must-see-47ng) in modern AI coding assistants.
- *"Find where we handle user logins — search for things like 'auth' or 'sign-in'."*
- *"Look in the `utils` folder for error-handling code."*
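Real assistants use semantic search for this; a rough intuition for what "search for things like 'auth' or 'sign-in'" means is expanding one concept into several related keywords before scanning. A naive sketch (the keyword list and helper name are illustrative):

```python
import re
from pathlib import Path

# Naive stand-in for semantic search: broaden "user logins" into a set of
# related keywords, then grep every Python file for them.
AUTH_PATTERN = re.compile(r"auth|sign[-_]?in|login", re.IGNORECASE)

def find_auth_code(root):
    """Return (file name, line number, line) for every auth-related match under root."""
    hits = []
    for path in sorted(Path(root).rglob("*.py")):
        for lineno, line in enumerate(path.read_text().splitlines(), start=1):
            if AUTH_PATTERN.search(line):
                hits.append((path.name, lineno, line.strip()))
    return hits
```

A true semantic search would also surface code that handles logins without using any of these words, which is exactly why asking the AI to "find code that does similar things" beats a plain keyword grep.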
5. Run Commands Safely
- **Ask Before Running**: The AI should always check with you before running terminal commands.
- **Fix Hangs**: Add `| cat` to commands like `git log` so they don't get stuck.
- **Background Tasks**: If a command runs forever (like a server), tell the AI to run it in the background.
Example:
- *"Run `npm start` in the background so we can keep working."*
- *"Check if `docker-compose up` is safe before running it."*
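Both habits are easy to demonstrate from Python's `subprocess` module. In this sketch, a harmless `echo` stands in for `git log`, and a short script stands in for a dev server (so the example is runnable anywhere with a POSIX shell):

```python
import subprocess
import sys

# 1. Pager-prone commands: piping through `cat` disables the pager, so a
#    non-interactive session can't get stuck waiting for a keypress.
log = subprocess.run(
    "echo 'commit abc123' | cat",   # stand-in for: git log | cat
    shell=True, capture_output=True, text=True,
)
print(log.stdout.strip())

# 2. Long-running commands: Popen returns immediately, so the session can
#    keep working while the "server" runs in the background.
server = subprocess.Popen(
    [sys.executable, "-c", "print('server up')"],
    stdout=subprocess.PIPE, text=True,
)
out, _ = server.communicate()       # collect the output later
print(out.strip())
```

The same logic is why you tell the AI to run `npm start` in the background: a foreground server would block every subsequent command.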
6. Keep Edits Clean
- **Use** `// ... existing code ...`: When suggesting changes, only show the new parts.
- **Read Before Editing**: The AI should check nearby code first to avoid mistakes.
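Here is what such a minimal edit looks like in practice. The marker comment uses the comment syntax of the file being edited (Cursor's prompt shows it as `// ... existing code ...` for JavaScript; Python is used below for illustration, and the function is hypothetical):

```python
# ... existing code ...
def total_price(items):
    """Sum item prices, now skipping items flagged as free (the changed part)."""
    return sum(i["price"] for i in items if not i.get("free"))
# ... existing code ...
```

Everything outside the markers is untouched, so the suggestion stays short and the diff stays reviewable.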
7. Start Projects the Right Way
If you ask the AI to create a new app, it should:
- Set up a `README.md` with instructions.
- Include a `package.json` or `requirements.txt` with needed tools.
- Use clean, modern design if it's a website.
Example:
- *"Create a new React app with Tailwind CSS. Add a README explaining how to run it."*
Advanced Prompt Engineering Techniques
Beyond the basic techniques, researchers have identified several advanced prompt engineering methods that can significantly improve AI coding assistance:
Chain-of-Thought Prompting
Chain-of-Thought (CoT) prompting involves guiding the AI to process problems through a series of logical steps, rather than jumping directly to an answer. This method is [particularly effective for complex problem-solving tasks](https://towardsdatascience.com/prompt-engineering-llms-coding-chatgpt-artificial-intelligence-c16620503e4e/), such as debugging or algorithm development.
**Example:**
Let's debug this function together. First, identify the input parameters, then outline the expected output, and finally, examine the logic flow to spot any errors.
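If you debug many functions this way, the same staged prompt can be assembled programmatically. A tiny sketch (the step wording mirrors the example above):

```python
# Build a chain-of-thought debugging prompt from explicit steps, so the
# model reasons in order instead of jumping straight to an answer.
steps = [
    "Identify the input parameters.",
    "Outline the expected output.",
    "Examine the logic flow to spot any errors.",
]
prompt = "Let's debug this function together.\n" + "\n".join(
    f"Step {i}: {step}" for i, step in enumerate(steps, start=1)
)
print(prompt)
```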
Self-Consistency Prompting
This technique involves prompting the AI to generate multiple solutions or answers and then selecting the most consistent or accurate one. [Research shows](https://interviewkickstart.com/blogs/articles/enhancing-code-ai-prompt-engineering) it's particularly useful for tasks requiring high reliability, such as code generation or complex problem-solving.
**Example:**
Generate three different implementations of this function, and then determine which one is the most efficient and why.
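Under the hood, self-consistency is usually implemented as sample-then-vote: draw several candidate answers and keep the most frequent one. A sketch with a stubbed generator standing in for a real model call:

```python
from collections import Counter

def self_consistent_answer(generate, prompt, n=3):
    """Sample n candidate answers and return the most common one (majority vote)."""
    candidates = [generate(prompt) for _ in range(n)]
    answer, count = Counter(candidates).most_common(1)[0]
    return answer, count

# Stub standing in for an LLM: three samples, two of which agree.
samples = iter(["O(n)", "O(n log n)", "O(n)"])
answer, votes = self_consistent_answer(lambda p: next(samples), "complexity?")
print(answer, votes)  # prints "O(n) 2"
```

When you prompt "generate three implementations and pick the best", you are asking the AI to run this vote for you in a single response.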
Role-Based Prompting
Assigning a specific role or persona to the AI can tailor its responses to fit a particular context or expertise level. This approach is [beneficial when seeking advice](https://portkey.ai/blog/prompt-engineering-techniques) or solutions from a specialized perspective.
**Example:**
As an experienced Python developer, explain how to optimize this code snippet for better performance.
So, how should you revise your prompt?
Case 1: Asking for a simple code snippet for prime numbers in Python
The revised prompt:
*"Let's pair program a Python function to check if a number is prime. I'm working in a new file called `prime_checker.py`. Please:*
- **Write the function directly into the file** (don't show me the code unless I ask).
- **Include docstrings** and clear comments.
- **Optimize for performance** (skip even numbers after 2, stop checking at `sqrt(n)`).
- **Add an** `if __name__ == '__main__':` **block** with test cases (e.g., 7, 10, 13).
- **Create a** `requirements.txt` if any dependencies are needed (though none should be here).
*If you hit any issues, try 3 times max, then ask me for guidance."*
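One plausible `prime_checker.py` satisfying that spec looks like this, written out so you can compare the AI's output against what the prompt asked for:

```python
import math

def is_prime(n):
    """Return True if n is a prime number.

    Optimized per the prompt: handle 2 as a special case, reject even
    numbers early, and stop trial division at sqrt(n).
    """
    if n < 2:
        return False
    if n == 2:
        return True
    if n % 2 == 0:
        return False
    # Only odd divisors up to sqrt(n) need checking.
    for divisor in range(3, math.isqrt(n) + 1, 2):
        if n % divisor == 0:
            return False
    return True

if __name__ == "__main__":
    # Test cases from the prompt.
    for number in (7, 10, 13):
        print(number, is_prime(number))
```

No `requirements.txt` is needed, since the script only uses the standard library, exactly as the prompt anticipates.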
Case 2: Develop an entire website
*"Let's build a modern shopping website together. Here's what we need:*
**Project Setup**
- Create a new folder called `shopping-site` with:
- `README.md` (explaining how to run the project)
- `package.json` (with React + Tailwind CSS)
- Clean folder structure (`components/`, `pages/`, `styles/`)
**Key Pages**
- Homepage (featured products)
- Product listing page (with filters)
- Shopping cart (with real-time updates)
**UI/UX Rules**
- Mobile-friendly design first
- Professional color scheme (mention if you want specific colors)
- Smooth animations for cart/add-to-cart
**Technical Specs**
- Use Next.js for the framework
- Tailwind CSS for styling
- Fake product data (no backend needed yet)
**Workflow**
- Edit files directly (don't show code unless I ask)
- Explain each major change before making it
- Stop and ask if you get stuck after 3 attempts
*Start by creating the basic structure and homepage. Check with me before adding complex features."*
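For the "Project Setup" step, a minimal `package.json` matching the spec (Next.js, React, Tailwind CSS) might look like the sketch below. The version ranges are illustrative assumptions, not pinned requirements:

```json
{
  "name": "shopping-site",
  "private": true,
  "scripts": {
    "dev": "next dev",
    "build": "next build",
    "start": "next start"
  },
  "dependencies": {
    "next": "^14.2.0",
    "react": "^18.2.0",
    "react-dom": "^18.2.0"
  },
  "devDependencies": {
    "tailwindcss": "^3.4.0"
  }
}
```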
To conclude,
The leaked Cursor.ai system prompt reveals something important: **AI isn't just getting smarter — we're getting better at guiding it.**
By adopting Cursor's techniques — like **pair programming mindset, clean edits, and structured workflows** — we can make any AI coding assistant more helpful. Whether you're writing a simple function or building a full website, the key is to:
- ✅ **Collaborate, not command** — Talk to the AI like a teammate.
- ✅ **Keep it clean** — Direct edits, minimal code dumps, and organized projects.
- ✅ **Set guardrails** — Limit retries, confirm risky actions, and prioritize readability.
The future of AI-assisted coding isn't about magic — it's about **clear communication and smart constraints**. Now that we've peeked behind the curtain, we can craft better prompts, reduce frustration, and build faster.
So next time you use an AI helper, remember: **You're the director. The AI is your actor. And with the right script, you'll get a stellar performance.**
**Sources and Further Reading:**
- [7 Prompt Engineering Secrets from Cursor AI](https://dev.to/abhishekshakya/7-prompt-engineering-secrets-from-cursor-ai-vibe-coders-must-see-47ng) - DEV Community
- [Prompt Engineering for LLMs and Coding](https://towardsdatascience.com/prompt-engineering-llms-coding-chatgpt-artificial-intelligence-c16620503e4e/) - Towards Data Science
- [Enhancing Code AI with Prompt Engineering](https://interviewkickstart.com/blogs/articles/enhancing-code-ai-prompt-engineering) - Interview Kickstart
- [Advanced Prompt Engineering Techniques](https://portkey.ai/blog/prompt-engineering-techniques) - Portkey AI