
πŸ’₯ Tricks to Make Any AIs Tell You When Your Work Sucks & Crush Bad Ideas Instantly

Never trust polite feedback again. Use specific AI prompt engineering tactics to expose every single flaw in your plan before you lose serious money.

TL;DR

To get the truth from AI, you must bypass its "polite" training. Most models are programmed to be helpful and encouraging, which often leads to vague, overly positive feedback that masks critical flaws in your business ideas. By using the BRUTAL method - Begin fresh, Right model, Use a persona, Third-party framing, Ask specifics, and Let AI grade itself - you can strip away the "nice guy" mask. Mastering these prompt engineering techniques ensures you receive ruthless, actionable critiques that save you time and money.

Key points

  • The Problem: AI models use "Reinforcement Learning from Human Feedback," which rewards politeness over blunt honesty.

  • The Solution: The BRUTAL framework forces AI out of its default assistant mode into a critical, objective judge.

  • Strategy: Using "Third-party framing" removes the AI's psychological bias to protect your personal feelings.

Critical insight

Polite feedback is a liability in business; you need an AI that treats your ideas with the skepticism of a ruthless investor.


Introduction

AI tools like ChatGPT, Claude, and Gemini are programmed to be helpful assistants. They are trained to be polite, safe, and encouraging. This is great when you are feeling down, but it is terrible when you need to make a serious business decision.

You need the truth, and prompt engineering is the skill that gets it for you.

If you rely on polite feedback, you might launch a product that fails or send an email that sounds unprofessional.

In this guide, we are going to fix this problem together. This method uses smart AI prompt engineering techniques to force the AI to take off the "nice guy" mask and tell you exactly what is wrong with your ideas.

I. Why is AI Prompt Engineering the Only Way to Get the Truth?

AI models are trained to be polite assistants because humans generally prefer positive reinforcement over harsh criticism. This "politeness bias" means that unless you guide the AI, it will prioritize being helpful and encouraging over being truthful about your flaws.

Prompt engineering is the tool that lets you work around this politeness bias and push the AI into a more objective, critical role.


Key takeaways

  • Fact: AI training uses "Reinforcement Learning from Human Feedback" which often equates "polite" with "good."

  • Gap: Default AI responses are rarely critical on their own; they need specific prompting instructions.

  • Update: Modern prompt engineering techniques can shift AI roles from "cheerleader" to "skeptical investor" instantly.

  • Action: Use prompt engineering to "trick" the AI out of its default helpful assistant mode.

1. The Real Reason Behind This Problem

If you are truly committed to eliminating weaknesses, you should also leverage this technology to receive true language coaching via voice clips, forcing the AI to point out exactly where you sound wrong instead of just praising your effort.

When companies build these AI models, they use a process called "Reinforcement Learning from Human Feedback."

This is a fancy way of saying that humans trained the AI to give answers that humans like. Most people like to be complimented. So, the AI learns that "polite" equals "good."

Once you master this command over the AI, you shouldn't just stop at text; you can apply these precise instructions to automate your office tasks using Google Gemini Advanced, turning manual grinds into streamlined workflows without needing any technical background.

2. Solution: BRUTAL Method

To solve the problem of AI being "too polite," I use the BRUTAL framework. This is basically a psychological trick. It stops the AI from trying to be a nice assistant and turns it into a strict judge. This helps you find critical mistakes in your plan before you lose any money.

Here are the 6 simple steps to get the truth:

  • B - Begin Fresh: Start a new chat or use "incognito" mode. If the AI does not know who you are, it will not try to protect your feelings.

  • R - Right Model: Choose the right tool for the job. Do not use a creative writing AI for hard business logic. Pick the models that are known for being direct and smart.

  • U - Use a Persona: Give the AI a specific role. Tell it to act like a skeptical investor, a harsh critic, or a strict boss instead of a helpful assistant.

  • T - Third-party Framing: This is the most important trick. Pretend the idea belongs to a stranger or a competitor. The AI will feel free to criticize it because it is not your idea.

  • A - Ask Specifics: Never ask vague questions like "What do you think?". Ask direct questions like "Why will this project fail?" or "Where is the logic wrong?".

  • L - Let AI Grade Itself: The final step. Ask the AI to rate its own honesty. If it held back, tell it to write the answer again with 100% honesty.
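If you talk to models through an API rather than a chat window, the six steps above can be collapsed into a reusable prompt builder. This is a minimal sketch under my own assumptions: the function name and the template wording are illustrative, not an official part of the method.

```python
# Sketch: assemble a BRUTAL-style critique prompt from its parts.
# The persona, framing, and question wording are illustrative.

def brutal_prompt(idea: str, persona: str = "a skeptical investor") -> str:
    """Combine a critic persona, third-party framing, and specific
    questions (steps U, T, and A) into one critique prompt."""
    return (
        f"Act as {persona}. A stranger sent me this idea and believes "
        "it is flawless. Tear it apart.\n"
        "Answer these questions specifically:\n"
        "1. Why will this fail?\n"
        "2. Where is the logic wrong?\n"
        "3. What is the single biggest risk?\n\n"
        f"The idea: {idea}"
    )

prompt = brutal_prompt("A subscription box for artisanal ice cubes.")
```

Run this in a fresh session (step B) against a blunt model (step R) and you have the whole framework in one call.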


II. Step B: How AI Prompt Engineering Helps You Begin Fresh

AI tools have long-term memories that can cloud their judgment if they "know" your past work or personality. To get an unbiased critique, you must use incognito or temporary modes to cut off the AI's context about your identity.

This ensures the AI treats you like a stranger and feels no psychological need to protect your feelings or validate your effort.

Key takeaways

  • Fact: AI memory can lead to "sentiment bias," where the tool becomes too friendly with a recurring user.

  • Difference: Standard chat saves your history; Temporary Chat treats every interaction as a blank slate.

  • Update: ChatGPT and Claude now offer specific "Ghost" or "Temporary" modes to stop long-term learning.

  • Detail: Use the "Ghost" icon in Claude or "Temporary Chat" in ChatGPT to ensure zero personal context.

The first letter in the BRUTAL method is B, which stands for Begin Fresh.

If the AI knows you are sensitive or that you worked really hard on a project, it will try to protect your feelings. Good AI prompt engineering starts with a clean slate.

β†’ You need to cut off the AI's memory so it treats you like a total stranger. A stranger does not care about your feelings.

How to Turn Off Memory in Different Tools

We need to use "temporary" or "incognito" modes. Here is how you can do it:

  • For Claude: Look for the ghost icon in the top corner. This is Incognito mode. It puts a white border around your chat.

  • For ChatGPT: Click the model selector (usually in the top left) and look for "Temporary Chat." This stops the AI from saving the conversation or learning from it.

  • For Gemini: Go to the menu on the left and find the setting to turn off chat history or use a temporary mode.

By doing this, you ensure the AI has zero context about who you are. This is the foundation of honest AI prompt engineering.

III. Step R: Best Models When You Need Honesty

The second step is R, which stands for Right Model.

Not all robots are the same. Some are built to be poets, and some are built to be logic machines. When you are practicing AI prompt engineering, choosing the right tool is 50% of the battle.

Think of it like choosing a teacher. Some teachers give everyone an "A" just for trying. Other teachers fail you if you miss a comma. You want the second teacher right now.

1. The Honesty Spectrum

Through my testing, I have found a clear difference in how these models handle criticism:

  • The "Nice" Ones: ChatGPT (GPT-5.2) and Claude (3.5 Sonnet) are very polite. They require the most work to get the truth.

  • The Balanced One: Gemini is in the middle. It can be surprisingly direct if you ask it to be.

  • The Blunt Ones: Models like Grok or DeepSeek are often trained to be less filtered. They are naturally more "brutal."


2. Why You Should Use Multiple Models

A pro tip in AI prompt engineering is to never trust just one brain. If you have a big business idea, run it through two or three different AIs.

You can paste your idea into ChatGPT and say, "Find the flaws." Then, paste the same idea into Gemini with the same prompt.

Compare the answers. Usually, one AI will catch a mistake that the other one missed. This gives you a complete picture of your risks.
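The "never trust one brain" tip is easy to systematize: send the identical critique prompt to each model and compare the answers side by side. A minimal sketch, assuming a chat-style API; the model names are placeholders, not real identifiers.

```python
# Sketch: one identical critique request per model, so the answers
# are directly comparable. Model names below are placeholders.

CRITIQUE = "Find the flaws in this plan. List the top 5 failure points.\n\n{plan}"

def build_requests(plan: str, models: list[str]) -> list[dict]:
    """Return one chat-style request payload per model."""
    prompt = CRITIQUE.format(plan=plan)
    return [
        {"model": m, "messages": [{"role": "user", "content": prompt}]}
        for m in models
    ]

requests = build_requests("Sell umbrellas in the desert.",
                          ["model-a", "model-b", "model-c"])
```

Because every payload carries the same prompt, any difference in the answers comes from the model, not the question.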

IV. Step U: Use AI Prompt Engineering to Build a Critic Persona

Now we get to the most fun part. The U stands for Use a Critic Persona.

This is the core of AI prompt engineering. You must tell the AI who it is supposed to be. If you do not give it a role, it defaults to "helpful assistant." You need to assign it a role that is allowed to be mean or critical.

We are going to look at three levels of intensity. You can choose which one fits your situation.

1. Level 1: The Skeptical Friend (The Daily Check)

Use this for emails or small ideas. It is not too harsh, but it is not fluffy either.

Prompt:

Act as a skeptical friend who cares about me but does not believe everything I say. I am going to show you an idea. I want you to point out the logical gaps. Do not just say it is good. Tell me why it might not work.

Here is the idea: [Paste your text here]

2. Level 2: The Red Team (The Professional Audit)

In cybersecurity, a "Red Team" is a group of hackers hired to break into a system to find weak spots. In AI prompt engineering, we use this term to find weak spots in your logic. This is great for business plans.

Prompt:

You are a professional Red Team Reviewer. Your only goal is to find failure points in my proposal. You must be ruthless. Hunt for loopholes, false assumptions, and over-optimism. If this idea fails, why did it fail? List the top 5 reasons.

Proposal to review: [Paste your text here]

3. Level 3: The Harsh Expert (The "Roast" Mode)

Sometimes, you need to be shaken up. This prompt tells the AI to take the gloves off completely.

Prompt:

Act as a world-class industry expert who has zero patience for bad work. Review my project. Be surgical and blunt. Tell me exactly what is lazy, what is confusing, and what needs to be deleted. Do not sugarcoat anything. If it is bad, tell me it is bad.

Project details: [Paste your text here]

By defining these personas using AI prompt engineering, you give the AI "permission" to be critical. It feels safe for the AI to criticize you because you asked it to play a character.
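If you reuse these personas often, they fit naturally into a small registry keyed by intensity level. This is a sketch of my own; the wording paraphrases the three prompts above and the message format assumes a typical chat API.

```python
# Sketch: the three critic personas as reusable system prompts.
# Wording is paraphrased from the article, not an official template.

PERSONAS = {
    1: "You are a skeptical friend. Point out logical gaps; never "
       "say an idea is good without saying why it might not work.",
    2: "You are a Red Team reviewer. Your only goal is to find "
       "failure points: loopholes, false assumptions, over-optimism.",
    3: "You are a harsh industry expert with zero patience for bad "
       "work. Be surgical and blunt; do not sugarcoat anything.",
}

def critic_messages(level: int, text: str) -> list[dict]:
    """Build a chat-style message list for the chosen intensity."""
    return [
        {"role": "system", "content": PERSONAS[level]},
        {"role": "user", "content": f"Review this:\n\n{text}"},
    ]
```

Putting the persona in the system message (rather than the user turn) keeps it in force for the whole conversation.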

V. Step T: AI Prompt Engineering Works Better with Third-Party Framing

The T stands for Third-Party Framing. This is a psychological trick that works on humans and AIs alike.

Even with a persona, the AI knows it is talking to you. It still wants to be slightly polite to the user. So, how do we fix this? We lie. We tell the AI that the idea belongs to someone else.

When you use AI prompt engineering to frame the idea as belonging to a "friend," a "competitor," or a "random person," the AI feels zero obligation to protect feelings. It becomes an objective judge.

How to construct the Third-Party prompt

Instead of saying, "Here is my email," you should say, "A coworker wrote this email..." Here is a specific example of how to write this prompt:

A stranger sent me this cold email pitch. They think it is perfect, but I am not sure. I want you to tear it apart so I can give them honest feedback. What makes this email sound like spam? Why would a client delete this immediately?

The email: [Paste text]

See the difference? In this AI prompt engineering example, you and the AI are on the same team, judging a "stranger."

The AI will be much more honest because it is not criticizing you directly. It is criticizing the "stranger."
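Third-party framing is mechanical enough to template. A minimal sketch, assuming my own function name and wording; swap in whatever kind of work you are judging.

```python
# Sketch: wrap any draft in third-party framing so the AI judges a
# "stranger's" work instead of yours. Template wording is illustrative.

def third_party_frame(kind: str, text: str) -> str:
    """Present the work as someone else's and ask for a teardown."""
    return (
        f"A stranger sent me this {kind}. They think it is perfect, "
        "but I am not sure. Tear it apart so I can give them honest "
        "feedback. What are its biggest weaknesses?\n\n"
        f"The {kind}: {text}"
    )

prompt = third_party_frame("cold email", "Dear Sir, amazing offer inside...")
```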

VI. Step A: Specific Questions in AI Prompt Engineering

The A stands for Ask Specific Questions.

Vague questions get vague answers. If you ask, "What do you think?", the AI will say, "It is nice." To get value, your AI prompt engineering must be precise. You need to direct the AI's attention to specific problems.

We should never let the AI wander. We must point it toward the danger zones.

1. Questions for Financial Logic

If you are writing a business plan, do not ask if it is "good." Ask about the money.

Prompt:

Act as a venture capitalist. Look at this plan. 

What is the single biggest financial risk here? Where am I underestimating costs? 

Be specific about the numbers.

2. Questions for User Confusion

If you are writing website copy or an instruction manual, you need to know where people will get stuck.

Prompt:

Read this text from the perspective of a tired, distracted customer. Which sentence is the most confusing? At what point would you stop reading and click away?

3. Questions for Future Failure

This is my favorite technique in AI prompt engineering. It is called a "Pre-Mortem." You ask the AI to assume the project has already died.

Prompt:

Imagine it is one year in the future, and this project has failed miserably. Write a short news article explaining exactly why it failed. Was it the price? The marketing? The product quality?

By forcing the AI to explain a failure that "happened," you get incredibly detailed insights into what you should fix right now.
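The pre-mortem is also worth templating, since the only parts that change are the project and the time horizon. A sketch under my own naming; the wording tracks the prompt above.

```python
# Sketch of the "pre-mortem" framing: assume the project already
# failed and ask the model to explain why. Wording is illustrative.

def premortem_prompt(project: str, horizon: str = "one year") -> str:
    """Ask for a retrospective on a failure that 'already happened'."""
    return (
        f"Imagine it is {horizon} in the future and this project has "
        "failed miserably. Write a short news article explaining "
        "exactly why it failed. Was it the price? The marketing? "
        f"The product quality?\n\nThe project: {project}"
    )
```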


VII. Step L: Self-Grading Improves AI Prompt Engineering Results

The final letter is L, which stands for Let AI Grade Itself.

Sometimes, even after all your hard work, the AI is still a little too soft. It might give you 80% honesty and 20% fluff. This step is the "magic key" in AI prompt engineering.

You can ask the AI to review its own answer. This is called "recursive prompting." The AI is very good at analyzing text, even its own text.

1. The Self-Correction Prompt

Do not start a new chat. Keep the conversation going. After the AI gives you feedback, paste this in:

Prompt:

I want you to rate the feedback you just gave me on a scale of 1 to 100 for honesty. Was it truly critical, or did you hold back?

Now, I want you to rewrite your feedback. This time, make it a 100/100 for brutal honesty. Remove all the polite filler words. Focus only on the problems.

2. Why This Works

When you use this AI prompt engineering technique, the AI often apologizes. It will say, "I rated my previous response a 70/100. Here is the 100/100 version."

The new version is almost always significantly better. It cuts out phrases like "You might want to consider..." and changes them to "You must fix this because..."

This step ensures you squeeze every drop of value out of the tool.
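The key detail, "do not start a new chat," corresponds in API terms to appending the self-grading request to the existing message history rather than sending a fresh one. A sketch with my own helper name; the follow-up text paraphrases the prompt above.

```python
# Sketch: recursive self-grading as a follow-up turn in the SAME
# conversation. The history list mirrors what a chat API expects.

GRADE_AND_REDO = (
    "Rate the feedback you just gave me from 1 to 100 for honesty. "
    "Then rewrite it as a 100/100 version: remove all polite filler "
    "and focus only on the problems."
)

def add_self_grading_turn(history: list[dict]) -> list[dict]:
    """Append the self-correction request; never reset the chat."""
    return history + [{"role": "user", "content": GRADE_AND_REDO}]

history = [
    {"role": "user", "content": "Critique my plan: ..."},
    {"role": "assistant", "content": "It has some minor issues..."},
]
history = add_self_grading_turn(history)
```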

VIII. How Can System Settings Automate AI Prompt Engineering?

If you use these tools every day, you might get tired of typing these long prompts. Luckily, advanced AI prompt engineering involves setting up your "System Instructions" or "Custom Instructions."

Most tools allow you to save a set of rules that applies to every single conversation. This means the AI will always know how you want to be treated.

Setting It Up for the Long Term

You can find this in the settings menu of ChatGPT (under "Personalization") or Claude (under "Projects" or "Settings").

Paste this text into your custom instructions:

You are an objective critic. In all our conversations, prioritize substance over politeness. 

Do not use filler compliments like "Great job" or "Interesting idea." If there is a flaw in my logic, point it out immediately. I value truthfulness above all else. 

Always provide actionable, specific feedback on how to improve.

By saving this, you automate your AI prompt engineering. You no longer have to beg for the truth; the AI will give it to you by default.
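The API-side equivalent of custom instructions is a system message prepended to every request. A minimal sketch, assuming my own helper name; the instruction text condenses the custom-instructions prompt above.

```python
# Sketch: custom instructions as a fixed system message that every
# request starts from. Instruction text condenses the article's.

CRITIC_SYSTEM = (
    "You are an objective critic. Prioritize substance over politeness. "
    "No filler compliments. If there is a flaw in my logic, point it "
    "out immediately. Always give actionable, specific feedback."
)

def with_critic_defaults(user_prompt: str) -> list[dict]:
    """Wrap any prompt with the honest-critic baseline."""
    return [
        {"role": "system", "content": CRITIC_SYSTEM},
        {"role": "user", "content": user_prompt},
    ]
```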

Conclusion

Getting honest feedback is hard. Our friends want to be nice. Our coworkers want to avoid conflict. And by default, our AI tools want to be polite assistants. But polite feedback does not help you grow. It does not save you from losing money on bad ideas.

By mastering AI prompt engineering and using the BRUTAL method, you can unlock a powerful advantage.

Let's recap the steps we learned:

  • B - Begin Fresh (No memory).

  • R - Right Model (Choose the blunt tools).

  • U - Use a Persona (Roleplay as a critic).

  • T - Third-party Framing (It's not my idea).

  • A - Ask Specifics (Target the weaknesses).

  • L - Let AI Grade Itself (The final polish).

What to do next:

I want you to take one piece of work you have right now: an email draft, a blog post idea, or a business plan. Open ChatGPT or Gemini, use the Third-Party Framing technique (Step T), and ask the AI to "tear it apart."

You might be surprised by what you find. It might sting a little bit, but remember: A little pain now saves you a lot of pain later. Go try it out!

If you are interested in other topics, in how AI is transforming different aspects of our lives, or in making money with AI through detailed, step-by-step guidance, you can find our other articles here.
