Survey Design
Survey methodology
The survey is now live. It follows a modular, multi-stakeholder design built on findings from 70+ in-depth interviews conducted across 48 pop-up cities between November 2025 and February 2026. Those interviews shaped the constructs measured, the failure modes tested, and the question wording.
Structure
The survey has two layers.
First, a role-specific base instrument (~7–10 min), completed once per respondent. Three versions exist, tailored by stakeholder role: attendees report on their experience, perceived value, and outcomes; organisers and operators report on operating model, governance maturity, budget structure, and operational constraints; partners, vendors, and sponsors report on relationship type, friction points, commercial value, and re-engagement conditions.
Second, a case-specific module (~5–6 min per case), repeatable for each pop-up city a respondent has experienced. This module contains a core battery of 18 Likert items across seven dimensions (membership fit, governance legitimacy, incentives and economics, safety and enforcement, coordination effectiveness, ops and logistics, economic health), plus three temporal trajectory items, outcome proxies, a failure-mode checklist, and post-event persistence indicators. It also includes role-conditional items that capture what only a given role can observe: incident volumes for organisers, involvement burden for partners, pricing and social dynamics for attendees.
Sampling and recruitment
This is a structured observational sample, not a representative one. Recruitment runs through community channels (Telegram, Discord, mailing lists), organiser-distributed links, and direct outreach. Secondary recruitment targets non-returners and early-leavers to reduce survivorship bias. Role-mix targets are approximately 60% attendees, 25% organisers, 15% partners — monitored and adjusted during fieldwork.
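Monitoring the role mix during fieldwork reduces to comparing observed role shares against the stated targets. As a minimal sketch (the `role_mix_report` helper and the 5-point drift threshold are illustrative assumptions, not the study's actual monitoring procedure):

```python
from collections import Counter

# Target role mix from the sampling plan (attendees / organisers / partners).
TARGETS = {"attendee": 0.60, "organiser": 0.25, "partner": 0.15}
DRIFT_THRESHOLD = 0.05  # illustrative: flag roles more than 5 points off target

def role_mix_report(roles: list[str]) -> dict[str, tuple[float, bool]]:
    """Return each role's observed share and whether it drifts past the threshold."""
    counts = Counter(roles)
    total = len(roles)
    report = {}
    for role, target in TARGETS.items():
        observed = counts.get(role, 0) / total if total else 0.0
        report[role] = (observed, abs(observed - target) > DRIFT_THRESHOLD)
    return report

# 70/20/10 split: attendees over-represented, so recruitment would be adjusted.
sample = ["attendee"] * 70 + ["organiser"] * 20 + ["partner"] * 10
print(role_mix_report(sample))
```

A flagged role would then steer the next wave of outreach, e.g. prioritising organiser-distributed links when attendees dominate early responses.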
Identity linkage and deduplication
Respondents generate a reusable Participant ID using a deterministic rule (combining fragments of personal information they can reproduce consistently). This links base surveys with case modules and flags duplicates. Email is optional, collected solely for opt-in follow-up. Survey links carry URL parameters for distribution metadata, stored as hidden fields.
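The exact fragment rule is not specified here, but a deterministic ID of this kind can be sketched as follows. The fragment choices and the `make_participant_id` helper are hypothetical illustrations, not the survey's actual rule; the point is that identical inputs always regenerate the same ID, so base surveys and case modules link without the fragments themselves being stored:

```python
import hashlib
import re
import unicodedata

def _normalise(fragment: str) -> str:
    """Lowercase, strip accents and non-alphanumerics so a respondent who
    re-types a fragment slightly differently still produces the same ID."""
    decomposed = unicodedata.normalize("NFKD", fragment)
    ascii_only = decomposed.encode("ascii", "ignore").decode("ascii")
    return re.sub(r"[^a-z0-9]", "", ascii_only.lower())

def make_participant_id(initials: str, city_prefix: str, birth_day: str) -> str:
    """Hash reproducible personal fragments into a short, stable ID.

    Hashing keeps the raw fragments out of the dataset; only the digest
    is stored, and identical digests flag duplicate responses.
    """
    combined = "|".join(_normalise(f) for f in (initials, city_prefix, birth_day))
    digest = hashlib.sha256(combined.encode("utf-8")).hexdigest()
    return digest[:10]  # short enough to copy by hand across instruments

# Minor formatting differences collapse to the same ID:
assert make_participant_id("AM", "Buch", "14") == make_participant_id("am", " buch ", "14")
```

Linkage still fails when a respondent supplies genuinely different fragments the second time, which is consistent with the non-trivial linkage-failure rate noted under limitations.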
Privacy
Responses are analysed and reported in aggregate. Free-text fields are optional and minimised. No raw responses containing potentially identifying information are published. Data access is restricted to the research team.
Known limitations
Non-random sampling means findings describe the focal-case set, not the full population of pop-up experiments. Respondent counts will vary across cases; low-count cases are treated as exploratory. All survey data is perception-based and cannot substitute for direct observation or administrative records. Self-generated Participant IDs carry an estimated 10–15% linkage failure rate; unlinked responses are analysed at the item level but excluded from cross-instrument models.
About the author(s)
Denisa Lepădatu is a former longevity biotech researcher and venture capitalist, currently building biotech programs and teams in pop-up cities from Vitalia and Infinita City to Frontier Tower and Edge City. Jimin Lee is a brand and product strategist and designer, and the founder of Thou Ārt. Jimin's latest passion is developing operation strategies for pop-up cities and builder communities.


