PromptBase

Prompt Details

Model
(claude-4-7-opus)
Token size
1,452
Example input
[describe your research background, for example academic researcher, applied practitioner, graduate student, organizational consultant, or public health professional]: an early-career UK-based academic researcher in organisational psychology at a Russell Group university.
[describe what you are trying to measure, for example attitudes, behaviors, psychological constructs, organizational variables, health outcomes, or customer experience dimensions]: employee engagement, workplace motivation, and discretionary effort in knowledge-based roles.
[describe who will be completing the survey, including any relevant characteristics such as age, education level, cultural background, professional context, or accessibility needs]: software engineers and product developers working in UK-based tech startups aged 22–45 with at least 1 year of professional experience.
[describe your planned administration method, for example online self-completion, interviewer-administered, paper-based, mobile device, or mixed mode]: an online self-completion platform distributed through professional networking communities and email lists.
Example output
Welcome — I'm Dr. Meridian Clarke. It's genuinely good to meet you, and I want you to know straightaway that you've come to the right place at exactly the right time in your research journey. Getting the instrument design right before fielding is the single highest-leverage thing you can do for the credibility of everything that follows — the analysis, the write-up, the peer review — all of it rests on the quality of what you're about to build. Let me tell you precisely what I bring to this engagement and what we're going to build together.

My Expertise and What It Means for Your Project

My work sits at the intersection of four specialisms that your project needs in combination:

1. Construct operationalization design — translating abstract psychological constructs like engagement, motivation, and discretionary effort into items that actually capture what the theory says they should capture, not approximations of it.
2. Question wording and cognitive testing architecture — designing items that your respondents — software engineers and product developers, not psychology students — will interpret consistently, accurately, and without the cognitive distortions that silently corrupt self-report data.
3. Response scale selection and optimization — making the psychometrically defensible choices about scale points, anchors, labelling, and format that determine whether your data behaves parametrically the way your analysis will require it to.
4. Survey administration and response rate maximization — designing the fielding strategy, sequencing, and follow-up architecture that gets you the sample size and data quality your publication goals demand, within your 10-minute constraint.

What This Engagement Will Build — The Complete Scope

Across twelve structured domains, we will design your instrument from first principles to fielding-ready.
Here is exactly what we will cover, in order:

1. Construct identification and operationalization — defining employee engagement, workplace motivation, and discretionary effort with the precision a journal reviewer will require, and identifying where they overlap and where they must be kept analytically distinct.
2. Question type selection and format design — choosing the item formats that serve your constructs, your population, and your 10-minute ceiling without sacrificing measurement quality.
3. Question wording quality and cognitive accessibility — building items your tech-sector respondents will answer accurately, quickly, and without the ambiguity that introduces error variance.
4. Response scale selection and optimization — making the fully reasoned decisions about your Likert and frequency scale architecture, including what the UWES mandates and where you have design latitude.
5. Question sequencing and survey structure — ordering your instrument so it primes accurate responding, controls context effects, and respects cognitive load for a professional population completing this voluntarily.
6. Skip logic and branching design — building the conditional pathways that keep the instrument clean, relevant, and within time limits across respondent subgroups.
7. Pilot testing and cognitive interview design — a structured pre-fielding validation protocol that catches the failures you cannot see from the inside.
8. Survey platform selection and administration design — matching your platform to your ethics requirements, your population's access patterns, and your data quality needs.
9. Response rate optimization and follow-up strategy — engineering the recruitment and follow-up sequence that gives you the sample your analysis requires.
10. Missing data prevention and management — designing against attrition and item non-response before they happen, and preparing your handling strategy for what survives.
11. Psychometric testing and validity evidence design — the confirmatory factor analysis, internal consistency, and convergent validity testing sequence that produces the evidence a methods-rigorous journal will require before accepting your measurement claims.
12. Long-term survey methodology competency — embedding what we build here into a transferable measurement capability you carry forward into every subsequent study.

By the time we complete this engagement, you will have a complete, psychometrically sound, publication-ready survey instrument — one you can defend to your dissertation committee, your ethics board, and a peer reviewer at the Journal of Occupational and Organizational Psychology or equivalent, because every design decision will be traceable to methodology, not guesswork.

The One Question That Shapes Everything

Before a single item is written, a single scale is selected, or a single sequencing decision is made, I need to understand the foundation — because every choice downstream depends on it. Here is what I need you to tell me:

What is the precise research question or hypothesis you are trying to answer with this survey — what do you believe is true, or want to test, about the relationship between employee engagement, workplace motivation, and discretionary effort in this population — and what would a meaningful, publishable finding look like to you?

I'm asking this because "measuring engagement, motivation, and discretionary effort" describes a measurement ambition, not a research purpose. Are you testing whether these three constructs are empirically distinct in tech startup contexts? Are you examining antecedents — what predicts high engagement in this population? Are you investigating outcomes — whether engaged software engineers exhibit more discretionary effort? Are you validating an adapted instrument for a population the UWES wasn't normed on? Are you building a baseline for a longitudinal or intervention study?
Each of those is a genuinely different measurement project, and the instrument design — which constructs get their own scales, how many items you can afford, what demographic variables you must capture, how you handle the UWES's mandatory inclusion, and what psychometric evidence you'll need at analysis — changes substantially depending on your answer. Take the space you need. The more precisely you describe what you're trying to find out and why it matters, the more precisely I can design every element of what follows.
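The example output's psychometric-testing domain mentions internal-consistency evidence. As a minimal, hypothetical sketch (not part of the listing, and not the methodologist's own procedure), here is how Cronbach's alpha, the standard internal-consistency coefficient for a multi-item Likert scale, can be computed from raw responses in plain Python; the function name and toy data are invented for illustration:

```python
# Illustrative only: Cronbach's alpha from raw Likert responses.
# Function name and toy data are hypothetical.
from statistics import pvariance

def cronbach_alpha(responses):
    """responses: one row per respondent, one column per item (e.g. 1-5 ratings)."""
    k = len(responses[0])                # number of items
    items = list(zip(*responses))        # one tuple per item (column)
    item_var_sum = sum(pvariance(col) for col in items)
    total_var = pvariance([sum(row) for row in responses])
    # alpha = k/(k-1) * (1 - sum of item variances / variance of total score)
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

# Four respondents answering a three-item engagement scale
data = [[4, 5, 4], [3, 3, 3], [5, 5, 4], [2, 3, 2]]
print(round(cronbach_alpha(data), 3))  # → 0.962
```

A common rule of thumb treats alpha of roughly 0.70 or above as acceptable for research use, though the threshold a reviewer expects depends on the stakes of the measurement.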
🌀 Claude

Survey and Questionnaire Design System

Instant access
Usage rights: Commercial use
Money-back guarantee
By purchasing this prompt, you agree to our terms of service
Tested · Guide · 4 examples · Free credits
Researchers who collect survey data often discover only at analysis that their results are unusable because of decisions made at the design stage. This prompt provides an adaptive expert system modeled on a world-class survey methodologist, guiding users from construct definition to publication-ready instrument design across twelve domains, including operationalization, scaling, cognitive design, skip logic, piloting, validity, and response strategy, with expert warnings before errors occur.
Added 1 week ago