Prompt Details
Model
(claude-4-7-opus)
Token size
1,332
Example input
[describe your academic background, discipline, and current level of research experience]: a second-year PhD candidate in Educational Technology at a Russell Group university with prior experience in mixed-methods research and survey design.
[describe your research topic and the phenomenon you are investigating]: the impact of AI-assisted learning tools on undergraduate student engagement and independent study habits in UK universities.
[describe the population you are studying, including any relevant characteristics such as age range, professional role, geographic location, or clinical status]: full-time undergraduate students aged 18–24 enrolled in humanities programmes across universities in England and Scotland.
[describe any design decisions you have already made or are leaning toward, including study type, data collection approach, and any instruments you are considering]: a sequential explanatory mixed-methods design beginning with a large-scale online survey followed by semi-structured interviews, using validated engagement scales and custom AI usage metrics.
Example output
Hello — I'm Dr. Quant. Let's Build Something Publishable.
Welcome. I'm genuinely pleased you're here, and I want to be direct with you from the outset: the quality of your research design will determine whether your findings are credible, replicable, and publishable — and that quality is entirely achievable with the right architecture in place.
Let me tell you precisely what my expertise covers and what this engagement will deliver.
What I Bring to This Collaboration
My work sits at the intersection of eight core methodological domains, all of which will be directly relevant to your study:
Research question and hypothesis architecture — translating a research interest into testable, falsifiable propositions with the precision that peer reviewers demand.
Research design selection and validity threat management — choosing the design that best fits your question and systematically protecting it against the internal, external, construct, and statistical conclusion validity threats that sink otherwise promising studies.
Sampling strategy and statistical power calculation — defining your target population with precision, selecting a sampling approach that is both rigorous and feasible, and calculating the exact sample size your inferential claims require.
Measurement instrument selection and psychometric quality assurance — identifying validated instruments, evaluating their reliability and validity evidence, and building the data quality safeguards that make your measurement trustworthy.
Data collection protocol design — constructing the procedural architecture that keeps your data clean, consistent, and defensible across a distributed, multi-institutional collection environment.
Statistical analysis plan development and assumption testing — pre-specifying every analytical decision before you touch the data, selecting tests whose assumptions you can actually meet, and protecting against the analytical flexibility that destroys replication.
Effect size calculation and practical significance reporting — going beyond p-values to report what your findings actually mean in the real world of educational technology and student learning.
Writing and transparency practices — producing reporting that meets open science standards and that a methodology reviewer at a leading journal would find genuinely credible.
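To make the sampling-and-power domain above concrete, here is a minimal sketch of the kind of calculation it refers to: a normal-approximation sample-size formula for comparing two group means, written in plain Python with only the standard library. The effect size, alpha, and power values are illustrative assumptions, not figures taken from the prompt itself.

```python
from math import ceil
from statistics import NormalDist

def n_per_group(d: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Normal-approximation sample size per group for a two-sided,
    two-sample comparison of means at standardized effect size d."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # critical value for the test
    z_beta = NormalDist().inv_cdf(power)           # quantile for desired power
    return ceil(2 * ((z_alpha + z_beta) / d) ** 2)

# A small-to-moderate effect (Cohen's d = 0.3) at alpha = .05 and 80% power:
print(n_per_group(0.3))  # 175 per group, i.e. ~350 participants before attrition
```

Exact calculations based on the noncentral t distribution (as produced by tools like G*Power) yield slightly larger figures, so this approximation is a floor for planning purposes, not a final number.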
What This Engagement Will Cover
By the time we are finished, you will have a complete, methodologically sound quantitative research design system built specifically for your study on AI-assisted learning tools, undergraduate engagement, and independent study habits in UK humanities programmes. That system will cover every domain from research question architecture through to results reporting and long-term research competency development — delivered one rigorous section at a time, at the depth a peer reviewer would expect.
I will not offer you generic advice. Every recommendation will be calibrated to your exact situation: your population, your constraints, your software access, your six-month window, and the specific methodological terrain of educational technology research. I will flag every risk I see, including the ones you haven't thought to ask about yet. And I will not move you forward until each domain is genuinely solid.
Before We Begin — The Single Most Important Question
Everything I am about to help you build — your hypotheses, your design, your sampling strategy, your instruments, your analysis plan — hinges on one foundational decision that your current thinking has not yet resolved with sufficient precision.
Here it is:
What is the precise causal or relational claim you are trying to make — and what is your theoretical mechanism for how AI-assisted learning tools produce their effect on student engagement and independent study habits?
More specifically: Are you investigating whether AI tool use (frequency, type, intensity) is associated with engagement and study habit outcomes? Are you trying to establish a directional influence — that tool use precedes and predicts changes in those outcomes? Or are you interested in how and why the relationship works — the mediating or moderating pathways involved?
This is not a semantic question. It is the question that determines whether your design needs to be cross-sectional or longitudinal, whether your analysis requires regression or something more structurally sophisticated, what your comparison condition looks like, and whether "impact" in your title is a word you can actually defend methodologically.
Your current framing — "impact of AI-assisted learning tools on undergraduate student engagement and independent study habits" — uses causal language ("impact on") while implying a design that cannot establish causation. That tension needs to be resolved at the very start, because it will either constrain or liberate every decision that follows.
Please tell me: What is the specific relationship you are trying to establish, and what do you believe is the theoretical mechanism driving it? The more precise your answer, the more precisely I can build the design around it.
Most quantitative researchers don't fail in analysis; they fail in design, long before any data are collected. This prompt delivers a complete, adaptive research design system that guides you from research questions and hypotheses through sampling, power analysis, measurement quality, test selection, assumption checking, and reporting. It produces a personalized, peer-review-grade blueprint that identifies hidden risks and strengthens validity, helping you deliver credible, defensible research.
Added 1 week ago
