
Prompt Packs

Copy-paste prompts for QA Lead, SDET, and PM personas — plus a single one-shot demo prompt that walks the full MCP story.

Replace <PROJECT_ID> (and similar placeholders) with real values from your environment. For the full 33-tool list, see the Tool Reference in the interactive MCP docs.

Agent workflows are not tool names: qa_intelligence_workflow, triage_failure_workflow, and rerun_decision_workflow are workflow_type values on testneo_run_agent_workflow — use that tool in chat, not a bare workflow string.

Contexts: Prefer testneo_get_unified_context_by_name with a name fragment (typical for Figma-ingested contexts, e.g. "figma checkout") instead of hard-coding <CONTEXT_ID>.

Write actions need TESTNEO_MCP_ALLOW_WRITE=true and confirm=true where the tool supports it.

Execution contract: responses include contract_version: "execution_intelligence.v1" plus canonical_status and raw_status. Prefer canonical_status for agent decisions.

QA Lead — quality signal, release risk, reruns

1) Connection sanity check

Validate my TestNeo MCP connection and list my projects with test case counts.

2) 30-day pulse

For project <PROJECT_ID>, show pass/fail trend for the last 30 days using a reasonable execution sample size, and summarize whether quality is stable, improving, or degrading in one short paragraph.

3) One-command QA intelligence

Call testneo_run_agent_workflow with workflow_type "qa_intelligence_workflow", project_id <PROJECT_ID>, range "30d", top_failures 2, rerun_limit 3. Summarize recurring_themes and triage_bundles, cite latest_failed_execution_ids, and list the top 3 follow-up actions for my team.

4) Failure triage sprint

Call testneo_run_agent_workflow with workflow_type "triage_failure_workflow", project_id <PROJECT_ID>, range "30d", top_failures 3. For each triage_bundles entry, give me: symptom, inferred theme, and concrete next QA steps.

5) Rerun decision (preview-first)

Call testneo_run_agent_workflow with workflow_type "rerun_decision_workflow", project_id <PROJECT_ID>, range "30d". Do not execute reruns — explain rerun_plan_preview and what would change with TESTNEO_MCP_ALLOW_WRITE and confirm=true on testneo_rerun_failed.

6) Semantic failure search

Search failures in project <PROJECT_ID> for "<KEYWORDS>" (e.g. checkout, login, timeout). Summarize recurring patterns and which areas need regression focus.

7) Deep dive on one run

Fetch the execution summary and failure bundle for execution <EXECUTION_ID>. Explain in QA terms what broke, how severe it is, and whether it looks like a product bug or a test flake.

8) Human-in-loop after a fix

Pull execution logs for execution <EXECUTION_ID>. Recommend whether we should rerun the same test case, update NLP steps, or file a defect — and say what evidence supports that.

9) Find the right unified context fast

List unified contexts for project <PROJECT_ID>, then resolve by name_query "figma <feature>" so I get a definitive context_id for generation (matches Figma-ingested names like "Figma — Checkout flow"; use prefer_context_id if two rows collide).

10) Project route map governance

Show the project route map for <PROJECT_ID> (testneo_get_project_route_map). If it is missing or partial, propose a merge payload for testneo_set_project_route_map using phrases that appear repeatedly in our failures. Do not apply it unless I explicitly approve with confirm=true.

SDET — automation, specs, flaky runs, MCP mechanics

1) Project + execution discovery

Validate TestNeo connection, list projects, then list recent failed executions for project <PROJECT_ID> with a short note on status normalization if anything looks ambiguous.

2) Structured watch

Watch execution <EXECUTION_ID> until completion (use polling). When done, summarize final status and any failure signals worth capturing in a CI ticket.

3) Failure bundle for automation

Get the failure bundle for execution <EXECUTION_ID> and extract: failed steps, log-snippet themes, an inferred root-cause bucket, and suggested automation fixes (selectors, waits, data).

4) NLP hardening — routes

Using testneo_apply_route_hardening, show how these NLP Navigate lines rewrite with our server's route profile and map (or suggest TESTNEO_ROUTE_MAP_JSON entries if phrases are unknown): [paste commands].

5) Persist improved NLP

Update test case <TEST_CASE_ID> NLP commands to: [paste list]. Apply route hardening with apply_route_hardening=true before saving unless I already used explicit {{base_url}} paths everywhere. Confirm the payload you will send matches my intent.

6) Playwright SDK path

Export test case <TEST_CASE_ID> as Playwright SDK TypeScript spec, then review it for brittle steps. If MCP write is enabled and I approve, run Playwright spec preview for project <PROJECT_ID> with a clear test name and confirm=true — and watch the resulting execution.

7) Context-driven generation + preview

For project <PROJECT_ID>: resolve unified context by name_query "<LABEL_FRAGMENT>" (or list contexts first), then generate tests from the resolved context_id with auth_preamble preset saucedemo when relevant. Preview the first few cases as NLP + spec drafts; flag risk_flags like missing assertions. Note any route hardening changes.

8) Trend + flaky classification

For project <PROJECT_ID>, get pass_fail_trend plus the recent failures list. Identify tests or flows that failed more than once in the window and label each as a suspected flake or a probable defect, based on MCP evidence only.

PM / engineering manager — plain language, outcomes, timelines

These prompts assume read-only MCP by default — usually enough for status and narratives.

1) Single project heartbeat

In plain language for a PM audience: summarize current quality status for project <PROJECT_ID> using recent executions and any trend MCP exposes. No tool jargon in the summary — explain what "good" vs "risky" means for release.

2) Weekly quality narrative

Give me a 5-bullet executive update for project <PROJECT_ID> covering pass rate trajectory, notable failures, customer-facing risk areas, and what we validated recently. Pull data via MCP tools, then simplify terminology.

3) "What broke recently?"

What failed in project <PROJECT_ID> in roughly the last two weeks, at a glance? Describe impact in feature terms (e.g. checkout), not MCP tool names; skip links and IDs execs rarely need, but keep execution IDs footnoted for engineering.

4) Trend direction

Is automated quality trending up or down for project <PROJECT_ID> over <RANGE e.g. 30d>? One chart-like sentence + one caveat about sample size based on MCP data.

5) Risk before demo or launch

We have a stakeholder demo soon. Scan project <PROJECT_ID> for recent failures and tell me top 3 product areas I should verbally caveat or avoid demoing — the risk list only, no blame.

6) Regression focus for planning

From MCP failure search/triage on project <PROJECT_ID>, what themes should roadmap planning prioritize next sprint (stability vs new feature risk)? Bullet list with business framing.

7) Human-in-loop design → tests (high level)

Someone ingested design context earlier. Explain at a PM level how project <PROJECT_ID> can go from Figma/unified context to draft tests — and what approvals are still required before auto-execution is safe. Actual IDs optional; keep it procedural.

8) Incident-style postmortem stub

For execution <EXECUTION_ID>, write a lightweight postmortem: what broke, scope, mitigation, prevention. Keep it readable by both PM and engineering; MCP can supply the technical detail, but strip acronyms from the narrative.

One-shot "full demo" prompt

Use once to show someone the full MCP story (read-heavy by default). After it runs, you can selectively enable writes and rerun with approvals.

You are demonstrating the TestNeo MCP integration.

1) Validate the TestNeo connection and list projects; pick project <PROJECT_ID> (or ask me if unclear).

2) For that project, call testneo_run_agent_workflow with workflow_type "qa_intelligence_workflow", range "30d", top_failures 2, rerun_limit 3: summarize execution_summary / pass rate, cite example failed execution IDs briefly, summarize recurring_themes from triage_bundles, and present rerun_plan_preview (do not execute reruns unless I explicitly confirm and writes are enabled).

3) Optionally, for test generation from a Figma (or other) unified context: call testneo_list_unified_contexts, then testneo_get_unified_context_by_name with a fragment of the context display name (e.g. name_query "figma checkout" for a context like "Figma — Checkout flow"). Outline generate from resolved_context_id → preview NLP + Playwright drafts, mention route hardening/env maps in one sentence if Navigate steps look vague, and list approvals needed before testneo_execute_generated_test_case with confirm=true.

4) End with three concrete prompts I can reuse next — one for QA lead, one for SDET, one for PM — all customized to real IDs discovered in steps 1–2.

Keep the transcript tight: bullets and short prose, no fluff.