Education as Problem Solving Lab: Supporting Talent
Building top talent through 16 offerings that turn potential into progress—diagnosing, mentoring, shipping, and connecting real problem solvers.
Education that only rewards recall is a dead end—especially in a world where AI automates routine work and the hardest problems are open-ended, interdisciplinary, and political as much as they are technical. Our previous work distilled what the leading reports (UNESCO, OECD, WEF, Learning Reimagined, Ashoka) keep repeating: the mission of schooling must shift from transmitting content to building problem solvers. That means learners who can frame the real problem, model complex systems, reason under uncertainty, iterate with feedback, navigate constraints, integrate disciplines, collaborate, plan ahead, use tools for thought, and reflect on their own thinking. Imagine Lab is designed to operationalize that consensus—turning theory into a working system teenagers can actually use.
The logic behind the 16 offerings is simple: diagnose → calibrate → accelerate → ship → signal → connect. First, we surface each student’s strengths and stretch zones (Talent Genome & Pathway Map). Then we create precise calibration loops (360° Peer Feedback & Reputation Graph) and bring in targeted expertise exactly when it matters (Mentor Guild & Precision Coaching). We move from ideas to evidence through structured shipping pipelines (Project Incubator + Microgrants, National Hackathons & Grand Challenges). Finally, we make capability visible and portable (Portfolio, Badges & National Talent Passport) and connect learners to authentic demand (Partner Network & Opportunity Marketplace). Each offering is a lever in a larger machine that compounds learning into impact.
Why this matters now is not abstract. UNESCO calls for agency and navigational capacities; OECD’s Learning Compass centers anticipation-action-reflection; WEF emphasizes competency ecosystems; Learning Reimagined pushes for nonlinear, authentic projects; Ashoka insists on frame-shifting and ethical action. All of that collapses to one operational truth: real ability emerges from authentic problems, visible artifacts, precise feedback, and public accountability. The offerings are therefore built around shipped work, not worksheets; expertise at the right moment, not generic lectures; evidence and iteration, not one-shot grades.
The first cluster tackles the front end of talent development. The Talent Genome replaces guesswork with a high-signal diagnostic and a bespoke growth path. 360° Feedback turns social perception into a learning asset, teaching students to ask for, parse, and act on tough notes. The Mentor Guild compresses cycles by focusing scarce expert time on pivotal decisions. Together, these three offerings give each student clarity—what to build, how to improve, and who can help—before they waste months wandering.
The middle cluster drives execution under reality. The Project Incubator and National Hackathons provide time-boxed, constraint-rich environments where teenagers move from problem framing to prototype to evidence quickly and honestly. The AI & Coding Accelerator treats technology as a thinking amplifier, not a toy—pairing agents and code with evaluation and safety. Idea Commons and the Collaborative Studios turn creativity and teamwork into trainable disciplines, using rituals, roles, and conflict protocols that mirror how high-functioning teams actually work.
The final cluster makes learning legible to the world and connected to real demand. Research & Data Clinics and the Systems & Policy Lab convert curiosity into credible evidence and implementable interventions. Storytelling & Demo Days translate complexity into narratives stakeholders can act on. Internships & Residencies put students in situations where deliverables matter to someone else, accelerating maturity. Performance & Mindset Coaching builds the personal operating system that sustains effort without burnout. The Founder & Grants Fast-Track, Portfolio & Talent Passport, and the Opportunity Marketplace ensure that outcomes—pilots, grants, users, jobs—follow from capability, not connections.
What changes when a country runs this playbook? You get a national flywheel: selection and diagnostics feed targeted coaching; targeted coaching speeds shipping; shipping creates public artifacts; artifacts attract mentors and partners; mentors and partners provide harder, more valuable problems; harder problems produce better talent—and the loop tightens. It is the practical answer to what the experts have asked of us for years: stop admiring the problem and build the infrastructure that lets young people practice solving real ones. Imagine Lab is that infrastructure for Czechia—ethical, ambitious, and relentlessly focused on turning potential into progress.
Summary
1) Talent Genome & Pathway Map
We start with a hands-on diagnostic, not a test: short, authentic challenges across the core problem-solving skills (framing, modeling, information judgment, iteration, constraints, synthesis, collaboration, foresight, tool use, metacognition). Students submit artifacts, think aloud, and get quick interviews; teammates and mentors add structured feedback. Trained assessors double-score with clear rubrics, and we synthesize a “spikes & stretch” profile that names what’s already strong and what to train next. Then we co-design a 12-month Pathway Map—projects to ship, mentors to meet, tools to learn, weekly execution rituals to adopt. All artifacts go into a baseline portfolio, and fairness checks make sure the process works for students from any region or school.
2) 360° Peer Feedback & Reputation Graph
After every sprint, students get tight, rubric-based feedback from teammates, mentors, and themselves on visible behaviors (clarity of framing, quality of models, strength of evidence, iteration speed, team contribution, etc.). We train everyone to give specific, constructive notes and use prompts that force “what to do next,” not vague praise. An AI-assisted digest clusters themes (what’s improving, what’s stuck) and suggests one lever to pull next sprint. Over time this creates a living reputation graph—a timeline of strengths and growth areas that helps students calibrate how others experience their work. Public endorsements can be attached to artifacts; sensitive critiques remain private by default.
3) Mentor Guild & Precision Coaching
Our mentors are operators, researchers, and founders who coach at the right moment rather than lecturing. Students post clear “asks” (e.g., “pressure-test our model” or “help us decide a pivot”), and we match them on domain and working style. Short, high-leverage sessions happen at defined gates—Problem, Model, Prototype, Evidence, Story—so mentor time drives decisions. Mentors annotate artifacts (briefs, code, analyses, decks) and follow up within 48 hours; periodic “red-team” critiques probe assumptions and edge cases. Beyond advice, mentors unlock intros, datasets, pilot sites, and other doors that speed up real progress.
4) Project Incubator + Microgrants
This is a 6–8-week pipeline that moves a student mission from concept to a prototype or pilot. We kick off with scoping, run weekly standups, hold a midpoint critique, do user testing, and finish with a demo day. Students get microgrants for tools or data, compute credits, access to method templates, and light ethics/privacy guardrails. The method spine is simple: frame the problem, model the system, build a first version, test it with real criteria, review evidence, iterate, then decide—scale, park, or transfer. Everything lives in a clean repo with a short learning report so the work is reusable and visible.
5) National Hackathons & Grand Challenges
We host time-boxed challenges (48-hour sprints and 2-week runs) on real briefs from cities, NGOs, labs, and startups—climate adaptation, civic tech, health data, AI safety. Tracks include software, hardware, and policy/data; each has ethics and safety checklists. Mentor stations help at key checkpoints (framing, prototyping, evaluation, demo). The judging rubric rewards impact, rigor, reproducibility, usability, and a credible continuation plan, not just flash. After the event, teams get a clear on-ramp—mini-grants, repo cleanup, partner follow-ups, and 30/60-day sprints—so good ideas don't die on Monday.
6) AI & Coding Accelerator (“Build with Copilots”)
Students learn to use AI as a thinking amplifier and code as a design language. We teach prompt design, retrieval workflows, critique loops, and responsible-use basics; then we move to notebooks (Python/JS), data cleaning and visualization, small APIs, and basic testing. Teams build useful automations/agents with a human-in-the-loop and write simple evaluation cards (factuality, robustness, failure modes). Every week, learners ship a tiny tool for a real user (school ops, a community task, a partner need) and keep a personal AI playbook of reusable prompts and pipelines.
7) Idea Commons & Collective Brainstorms
An always-on ideation hub where students co-create and stress-test ideas before investing months. Weekly rituals include five-framings (rewrite the problem five ways), assumption-busting, SCAMPER, analogy chains, and quick “pitch-back” sessions. Teams do problem-market fit checks—mini interviews, back-of-the-envelope models, and simple success/kill criteria. All ideas live in a tagged, versioned backlog with attribution norms. Transparent gates (green-light, park, kill) move the best concepts directly into the Incubator with mentors and microgrants.
8) Collaborative Problem-Solving Studios
These studios mimic how real teams work. Students rotate roles—facilitator, skeptic, modeler, integrator, storyteller—and commit to a team contract. They run standups, keep decision logs, hold retrospectives, and schedule “red-team” critiques. Stakeholder interviews and co-design sessions ensure real-world grounding. Shared artifacts—system maps, test plans, risk registers, evidence boards—make thinking visible. Simple conflict protocols and “steel-man” drills turn disagreements into better decisions rather than drama.
9) Research & Data Clinics
Clinics turn questions into credible evidence. Students refine a question, choose or justify a model (causal, descriptive, predictive), and plan data collection/curation. They work in reproducible notebooks with data dictionaries and provenance logs, and they run validity checks (sampling logic, missingness plans, confounds, sensitivity analyses). Light ethics/privacy guardrails protect participants and data. Peers try to reproduce results, red-team assumptions, and request revisions; finalized evidence briefs summarize findings, limits, and next tests.
10) Systems & Policy Lab
This lab converts projects into implementable interventions. Students map stakeholders and incentives, scan relevant rules (standards, procurement paths, data law), and build impact and cost models that include second-order effects. They design pilot protocols (scope, safeguards, measurement) and pursue MoUs or approvals with municipalities, NGOs, or host orgs. Clear governance and handoff playbooks keep pilots responsible and scalable—or sunset them cleanly if the evidence says no.
11) Design for Impact: Storytelling & Demo Days
If work isn’t understood, it won’t be adopted. Here students learn narrative architecture (problem → stakes → insight → solution → evidence → next step) and adapt it for different audiences (technical, policy, public). They craft honest visuals (clear charts, uncertainty bands, known failure modes) and rehearse reliable demos with fallbacks. Q&A sparring prepares them for tough objections. Each team leaves with a media kit—one-pager, deck, 90-second video, and links to repos or briefs—ready for partners and press.
12) Internship & Residency Exchange
Short, scoped placements with startups, labs, media, or agencies give students situated practice with real stakes. We match by capability and host need, write a tight scope doc (problem, constraints, deliverables, acceptance criteria), and assign dual mentors (host supervisor + Imagine Lab coach). Milestones include kickoff, midpoint review, final handover, and a reflective retro. Professional guardrails cover IP, privacy, safeguarding, and stipends or credit. Residencies end with a public artifact and a clear supervisor evaluation.
13) Performance & Mindset Coaching
Elite performance isn’t magic; it’s a personal operating system. Students set weekly goals, block time, and do end-of-week retros (wins, frictions, fixes). They build deep-work routines, clean up attention hygiene, and learn cognitive tools (bias spotters, premortems, checklists). We teach resilience and recovery—sleep, stress tools, sustainable effort cycles—and make feedback a muscle: how to ask, parse, and act on tough notes. The output is a written Personal Operating Manual and a steady cadence that gets hard things done without burnout.
14) Founder & Grants Fast-Track
For teens ready to found now, we provide just-in-time scaffolds. First comes proof: problem interviews, small smoke tests, basic traction. Then entity setup (fiscal sponsorship/NGO/s.r.o.), contributor/IP templates, and minimal bookkeeping. Students scope a v0.1 offer with a tiny SLA, then pursue funding: microgrants, Czech/EU youth and innovation grants, sponsor outreach. A small mentor circle provides governance, risk logs, and ethical guardrails while students win their first users and first money.
15) Portfolio, Badges & National Talent Passport
Grades don’t show what you can build, so we maintain a living portfolio. The system auto-collects artifacts (repos, briefs, demos, notebooks); students add short reflections (goal, method, evidence, limits). Mentors and partners attach verifications to specific artifacts. A clear badge taxonomy maps demonstrated capabilities to the 12 problem-solving areas and tool fluencies. Everything rolls up into a Talent Passport—a concise, shareable profile for partners—under strong privacy controls students manage.
16) Partner Network & Opportunity Marketplace
A two-sided platform connects real problems with student squads. Partners post briefs using a clear template (context, constraints, data access, deliverables, timeline, review points). Matching uses the Talent Passport and staff oversight to form balanced teams. Delivery follows a simple spine—kickoff, midpoint review, evidence check, final handover—with decision logs and basic ethics/quality checks (reproducibility, privacy, accessibility). Small bounties, certificates, and public showcases close the loop and feed the next opportunity.
The Offerings
1) TALENT GENOME & PATHWAY MAP
Definition
A high-signal diagnostic + short immersion that surfaces each student’s problem-solving profile (across the 12 areas), then converts it into a personal 12-month growth path.
Key idea
Strengths-based, authentic performance beats grades. Use real challenges (not quizzes) to see how a student frames problems, models systems, handles uncertainty, iterates under constraints, synthesizes disciplines, collaborates, plans ahead, uses tools, and reflects — then map a bespoke pathway.
What it consists of (points)
Challenge battery (2–3 weeks): compact, authentic tasks aligned to the 12 areas (e.g., reframe a messy brief; build a causal model; run an evidence check; generate/test hypotheses; time-boxed build; cross-domain analogy; team sprint; futures wheel; metacog journal).
Mixed evidence streams: artifacts, think-alouds, peer feedback, mentor micro-interviews, and self-reflection; rubric-scored against clear descriptors (novice → advanced).
Calibration & fairness: double-scoring + moderation for reliability; bias checks across subgroups.
Strengths synthesis: “spikes & stretch” narrative with 1–2 growth hypotheses (e.g., “excellent framer; needs faster iteration under constraints”).
Pathway design session: select 1–2 tracks (e.g., systems & policy / AI & data / venture build / civic tech), target projects, mentors, and weekly execution rituals.
Core outputs (few, high-value)
Talent Genome (radar + narrative on the 12 areas).
12-month Pathway Map (goals, projects, mentors, milestones, rituals).
Baseline portfolio (diagnostic artifacts + reflection).
How we measure success
Reliability/validity: inter-rater agreement ≥ 0.80; predictive correlation with sprint performance, shipping rate, and partner ratings at 3–6 months.
Capability delta: movement on 2–3 targeted rubric dimensions after 12 weeks and 6 months.
Pathway adherence & momentum: % milestones hit; project continuation rate; mentor fit score.
Equity & fairness: gap analysis by region/gender/school type; corrective actions logged.
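The double-scoring reliability gate above can be sketched in code. A minimal illustration (all scores and the helper name are hypothetical) computes Cohen's kappa for two assessors scoring the same artifacts; in practice a weighted kappa or Krippendorff's alpha may suit ordinal rubrics better:

```python
from collections import Counter

def cohen_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters double-scoring the same artifacts."""
    n = len(rater_a)
    # Observed agreement: fraction of artifacts where both raters match.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement by chance, from each rater's score frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Two assessors score 10 diagnostic artifacts on a 1-4 rubric (illustrative data).
a = [3, 2, 4, 3, 1, 2, 3, 4, 2, 3]
b = [3, 2, 4, 2, 1, 2, 3, 4, 2, 3]
kappa = cohen_kappa(a, b)
print(f"kappa = {kappa:.2f}")  # send the dimension to moderation if below 0.80
```

A rubric dimension falling under the 0.80 threshold would trigger the moderation and calibration steps described above.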
2) 360° PEER FEEDBACK & REPUTATION GRAPH
Definition
Structured, rubric-based multi-source feedback after every sprint, aggregated into a living reputation graph (how the community experiences your thinking and collaboration).
Key idea
Top performers grow via social calibration: specific, timely, behavior-level feedback that sharpens metacognition and team effectiveness. Make feedback a skill, not a vibe.
What it consists of (points)
Aligned rubrics (the 12 areas condensed into observable behaviors: framing clarity, model quality, evidence use, iteration velocity, constraint fluency, synthesis depth, team contribution, foresight, tool fluency, reflection discipline).
Sprint cadence: quick 360s (teammates + mentor + self) after each sprint; guidance prompts ensure specificity + actionability.
Calibration workshops: short trainings on giving/receiving feedback; bias checks; psychological safety norms.
Signal processing: AI-assisted clustering (themes, deltas, outliers) → a Feedback Digest with “one lever” to pull next sprint.
Opt-in public signals: badges/endorsements on artifacts; private critical notes remain private by default.
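The signal-processing step admits a minimal sketch. The theme keywords and the `build_digest` helper below are illustrative assumptions; a production digest would likely use embeddings or an LLM rather than keyword matching, but the shape of the output—clusters plus one lever—is the same:

```python
from collections import defaultdict

# Hypothetical theme keywords; a real system would cluster semantically.
THEMES = {
    "framing": ["frame", "scope", "problem statement"],
    "evidence": ["data", "metric", "evidence", "test"],
    "iteration": ["iterate", "slow", "velocity", "ship"],
}

def build_digest(comments):
    """Group sprint feedback into themes; the most-flagged theme is the lever."""
    clusters = defaultdict(list)
    for c in comments:
        lowered = c.lower()
        for theme, keywords in THEMES.items():
            if any(k in lowered for k in keywords):
                clusters[theme].append(c)
    lever = max(clusters, key=lambda t: len(clusters[t])) if clusters else None
    return {"clusters": dict(clusters), "one_lever": lever}

comments = [
    "Great problem statement, very crisp scope.",
    "Needs stronger evidence before the pivot; the metric was unclear.",
    "Iterate faster; v2 shipped too slow.",
    "Add a baseline metric to the test plan.",
]
digest = build_digest(comments)
print(digest["one_lever"])
```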
Core outputs
Reputation graph (time series of strengths + growth areas).
Feedback Digest (top strengths, one priority lever, examples).
Growth commitments (student-written, sprint-bound).
How we measure success
Feedback quality index: % actionable comments; specificity score; turnaround time.
Predictive power: feedback themes predict next-sprint improvements and conflict reduction.
Team health: conflict-to-decision time; re-teaming requests; peer NPS.
Fairness: variance/polarization checks; bias mitigation actions logged.
3) MENTOR GUILD & PRECISION COACHING
Definition
A curated network of operators/researchers/founders delivering just-in-time, milestone-based coaching tied to artifacts (not lectures).
Key idea
Targeted intervention at the right moment (framing, modeling, prototyping, evaluation, storytelling) collapses cycles and raises quality. Mentor time is precious — turn it into decision acceleration.
What it consists of (points)
Mentor onboarding & playbooks: role clarity (subject SME, process coach, red-team), ethics, feedback style, sample annotations.
Matching & “ask” marketplace: students post clear asks; algorithm + staff broker best fit (skill, domain, working style).
Mentor gates: short, high-leverage sessions at key phases (Problem Gate → Model Gate → Prototype Gate → Evidence Gate → Story Gate).
Annotated artifacts: mentors mark up briefs, models, code, analyses, decks; 24–48h async follow-ups for decisions/pivots.
Red-team reviews: structured challenge sessions to pressure-test assumptions and edge cases.
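The matching step can be sketched as a simple scoring function. The weights and fields below are hypothetical; the real broker combines an algorithmic score like this with staff judgment, mentor availability, and load balancing:

```python
def match_score(ask, mentor):
    """Score a mentor against a student 'ask' on domain overlap and working style.
    The 0.7/0.3 weights are illustrative, not a tuned production setting."""
    domain = len(ask["domains"] & mentor["domains"]) / max(len(ask["domains"]), 1)
    style = 1.0 if ask["style"] == mentor["style"] else 0.0
    return 0.7 * domain + 0.3 * style

ask = {"domains": {"health data", "ml"}, "style": "hands-on"}
mentors = [
    {"name": "A", "domains": {"ml", "policy"}, "style": "hands-on"},
    {"name": "B", "domains": {"health data", "ml"}, "style": "socratic"},
]
best = max(mentors, key=lambda m: match_score(ask, m))
print(best["name"])
```

Note that full domain overlap outweighs a style match under these weights, which is the intended bias: gate sessions are short, so subject fit matters most.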
Core outputs
Decision logs (pivot/commit with reasons).
Annotated artifacts (before/after).
Network unlocks (intros, data access, pilot sites).
How we measure success
Time-to-decision: days from gate → pivot/commit.
Rework avoided: fewer iteration cycles needed to reach MVP/evidence.
Quality ratings: mentor + partner scores on rigor, clarity, feasibility.
Opportunity conversion: internships, pilots, grants triggered by mentor referrals.
Retention & satisfaction: mentor NPS; student coaching NPS.
4) PROJECT INCUBATOR + MICROGRANTS
Definition
A structured design-to-ship pipeline (6–8 weeks) that moves missions from concept to prototype/pilot, supported by small grants and guardrails.
Key idea
Students learn most by shipping to real users. Constrain scope, instrument learning, and require evidence. Microgrants buy tools/data/time; guardrails keep ethics and safety first.
What it consists of (points)
Sprint structure: Kickoff scoping → weekly standups → midpoint critique → user tests → evidence review → demo day.
Resources: microgrants (€100–€500), compute credits, data access, design/dev toolkits, compliance checklists (IRB-lite for user research).
Methods spine: problem framing → system/model → prototype → test plan → metrics → iterate → decision (scale/park/transfer).
Partner briefs (optional): real needs from startups/NGOs/municipalities; shared success metrics.
Documentation: working repo, decisions/rationale, learning report.
Core outputs
Prototype/MVP (product, policy brief, data tool, experiment).
Validated learning report (what was tried, evidence, next step).
Public artifact (repo/demo/poster/video) suitable for portfolio & partner review.
How we measure success
Shipping rate: % projects that reach MVP/pilot by demo day.
External validation: user adoption, partner sign-off, replication by another team/school.
Evidence quality score: clarity of metrics, test design, reflection on limits.
Continuation & leverage: % projects continued 8–12 weeks later; follow-on funding/pilots; publications or policy traction.
Cycle efficiency: time-to-first-user; iteration velocity (v1→v3).
5) NATIONAL HACKATHONS & GRAND CHALLENGES
Definition
Time-boxed, high-intensity challenges on real problems (climate, civic tech, health data, AI safety) that push students from framing → prototype → evidence → demo.
Key idea
Compressed cycles, real constraints, and public stakes create learning velocity and transfer of skills (WEF/OECD: collaboration + problem solving; Ashoka/Learning Reimagined: authentic missions).
What it consists of
Partner briefs & open data: municipalities, NGOs, labs, startups supply problem statements, datasets, constraints, and success criteria.
Tracks & guardrails: software, hardware, policy/memo, data science; ethics, privacy, and safety checklists.
Mentor stations: framing, modeling, prototyping, evaluation, and demo coaching at set checkpoints.
Judging rubric: impact, rigor, reproducibility, usability, continuation plan (not just “cool demo”).
Continuation on-ramp: post-event mini-grants, repo hygiene, partner follow-ups, and 30/60-day sprint slots.
Core outputs
Working demos/repos (code, hardware, or policy prototypes).
Evidence briefs (what was tested, metrics, limits, next step).
Partner-ready artifacts (decks, one-pagers, deployment plans).
How we measure success
Continuation rate: % teams still building at 30/60/90 days.
Adoption & pilots: partner MoUs, trials, policy citations.
Quality signals: reproducibility of repos, test coverage, user feedback.
Participation equity & excellence: regional/gender representation + prize-to-pilot conversion.
Community NPS: student/mentor/partner satisfaction and re-engagement.
6) AI & CODING ACCELERATOR (“BUILD WITH COPILOTS”)
Definition
A hands-on accelerator where students use AI as a thinking amplifier and code as a design language to automate tasks, build agents, run simulations, and ship tools.
Key idea
Tool fluency matters more than passive digital literacy (WEF/UNESCO). Train students to evaluate models, reduce hallucinations, reason with data, and turn ideas into running systems.
What it consists of
Cognitive AI: prompt design, retrieval workflows, chain-of-thought scaffolding, critique loops; model limits, bias, safety.
Data & code: notebooks (Python/JS), data cleaning, analysis, visualization, small APIs, basic testing.
Agentic builds: task automation, evaluators, guards; human-in-the-loop design.
Evaluation: factuality/precision, robustness tests, ablation of prompts/tools.
Ship cycles: weekly micro-automations for real users (school ops, community, partners).
Core outputs
Working utilities/agents (scripts, bots, dashboards).
Eval cards (metrics, failure modes, responsible-use notes).
Personal AI playbook (reusable prompts, pipelines, checklists).
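A rough sketch of what an eval card might look like in code: a toy tool is run against held-out checks, and its accuracy and failure modes are recorded. All names here (`extract_year`, `build_eval_card`, the checks) are illustrative, not a prescribed curriculum artifact:

```python
import re

def extract_year(text):
    """Toy 'tool' under evaluation: pull a 4-digit year from free text."""
    m = re.search(r"\b(1[89]\d{2}|20\d{2})\b", text)
    return m.group(0) if m else None

# Held-out checks the tool never saw during development (illustrative).
CHECKS = [
    ("Founded in 1998 in Brno.", "1998"),
    ("The 2021 pilot scaled citywide.", "2021"),
    ("No date is mentioned here.", None),
]

def build_eval_card(tool, checks):
    """Record accuracy and concrete failures (input, got, wanted)."""
    failures = [(inp, tool(inp), want) for inp, want in checks if tool(inp) != want]
    return {
        "accuracy": (len(checks) - len(failures)) / len(checks),
        "failure_modes": failures,
    }

card = build_eval_card(extract_year, CHECKS)
print(card["accuracy"])
```

Students would extend the card with robustness checks (perturbed inputs) and responsible-use notes before shipping the tool to a real user.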
How we measure success
Real automation: hours saved, tasks reliably completed.
Model quality: accuracy/factuality on held-out checks; robustness under perturbations.
Adoption: # users of a tool, retention, issues closed.
Code quality: review pass rate, tests, reproducibility.
Transfer: students reusing patterns across new problems.
7) IDEA COMMONS & COLLECTIVE BRAINSTORMS
Definition
An always-on ideation and reframing hub where students co-create, stress-test, and evolve ideas before committing scarce time and money.
Key idea
Great pipelines start with great questions. Community divergence → convergence cycles raise idea quality and kill weak concepts early (Ashoka: frame-shifting; Learning Reimagined: inquiry first).
What it consists of
Weekly rituals: five-framings, assumption busting, SCAMPER, analogy chains, rapid “pitch-back” sessions.
Problem-market fit checks: quick user/partner interviews, back-of-the-envelope models, success/kill criteria.
Idea backlog: tagged, versioned, searchable; open-by-default with attribution norms.
Selection gates: transparent green-light/park/kill decisions based on signal, not hype.
Feeder to incubator: green-lit ideas get slots, mentors, and microgrants.
Core outputs
Validated problem statements (clear stakes, users, constraints).
Short concept notes (1-pager spec + test plan).
Kill/green-light decisions with rationale and next steps.
How we measure success
Conversion: % ideas that become prototypes with credible traction.
Healthy kill rate: % ideas stopped early with evidence (saves time).
Speed to clarity: days from idea → decision.
Cross-pollination: # ideas co-created by students from different domains.
Participant cadence: active contributors per cycle and return rate.
8) COLLABORATIVE PROBLEM-SOLVING STUDIOS
Definition
Structured, team-based labs that model how real work works: roles, constraints, decisions, documentation, and stakeholder engagement.
Key idea
Collaboration is a trainable cognitive skill (OECD/WEF). With roles, artifacts, and conflict protocols, teams produce better reasoning and more durable solutions (Ashoka: empathetic teamwork).
What it consists of
Role rotation: facilitator, skeptic, modeler, integrator, storyteller; shared accountability.
Team contract & rituals: stand-ups, decision logs, retrospectives, “red-team” critiques.
Stakeholder work: interviews, co-design sessions, requirement mapping.
Artifacts: system maps, test plans, risk registers, evidence boards.
Conflict playbooks: mediation protocols, “steel-man” exercises, decision rules.
Core outputs
Team playbook (ways of working, norms, decisions, risks).
Solution artifacts (models, prototypes/pilots, evidence briefs).
Stakeholder feedback (user tests, partner comments) feeding the next iteration.
How we measure success
Delivery reliability: % teams hitting agreed milestones with usable artifacts.
Team effectiveness: peer/mentor rubric on communication, synthesis, and decision quality.
Conflict resolution: incidents resolved within SLA; reduced escalation.
Stakeholder satisfaction: partner/user ratings; continued collaboration.
Re-teaming demand: how often students are requested by others for future squads.
9) RESEARCH & DATA CLINICS
Definition
Methods coaching and review cycles that turn questions into credible evidence—from problem statement to model, data, analysis, and limits—using reproducible workflows.
Key idea
Great problem solvers don’t just “find data”; they design evidence. Clinically tight research habits (clear hypotheses, sound methods, transparent code, limits stated) multiply the quality of decisions and the portability of findings.
What it consists of
Question → Model → Data design: refine the research question; choose/justify the model (causal, descriptive, predictive); plan data requirements and collection/curation steps.
Reproducible workflows: version-controlled notebooks (e.g., Python/JS/Sheets), data dictionaries, provenance logs, and standardized folder structures.
Validity & bias checks: instrument reliability, sampling logic, missingness plans, confound identification, sensitivity analyses.
Ethics & privacy guardrails: consent patterns for user research, anonymization, storage practices, IRB-lite review for minors/public data.
Peer review & replication: clinic peers attempt to reproduce results; red-team challenges assumptions; revise and re-run.
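The provenance-log habit can be sketched as follows; the entry format is an assumption, not a prescribed standard. Hashing each dataset version lets a reviewing peer confirm they are re-running the analysis on identical inputs:

```python
import hashlib
import json
from datetime import datetime, timezone

def log_provenance(data_bytes, step, log):
    """Append one provenance entry: content hash + processing step + timestamp."""
    entry = {
        "sha256": hashlib.sha256(data_bytes).hexdigest()[:12],
        "step": step,
        "at": datetime.now(timezone.utc).isoformat(timespec="seconds"),
    }
    log.append(entry)
    return entry

log = []
raw = b"id,score\n1,3\n2,\n"
log_provenance(raw, "raw export from survey tool", log)
# Illustrative cleaning step: impute the missing score for respondent 2.
cleaned = raw.replace(b"2,\n", b"2,4\n")
log_provenance(cleaned, "missing-score imputation", log)
print(json.dumps(log, indent=2))
```

Any divergence between a peer's recomputed hash and the logged one immediately localizes where a replication attempt drifted from the original pipeline.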
Core outputs (few, high-value)
Clean dataset + code notebook (fully documented, runnable).
Mini pre-registration or analysis plan (what we’ll test and how).
Short “evidence brief” (findings, caveats, next tests).
How we measure success
Reproducibility rate: % of analyses that peers can re-run with same result.
Evidence quality score: clarity of design, validity checks, limitations stated.
Actionability: # of downstream decisions/pivots credibly informed by the brief.
External review: partner/supervisor ratings; acceptance into hackathon judging or pilot decisions.
Error reduction: decline in avoidable errors (leakage, p-hacking, misreads) across cycles.
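The leakage error named above is easy to demonstrate with a toy example (illustrative numbers): preprocessing statistics fitted on the full dataset quietly import information from the held-out split, which clinic reviewers are trained to catch.

```python
def mean(xs):
    return sum(xs) / len(xs)

# Last point is the held-out test example; it is also an outlier.
data = [1.0, 2.0, 3.0, 100.0]
train, test = data[:3], data[3:]

leaky_center = mean(data)   # wrong: the test point shapes the statistic
clean_center = mean(train)  # right: fit preprocessing on the train split only

print(leaky_center, clean_center)
```

The leaky estimate is pulled far from the training distribution by the test outlier, inflating apparent robustness; the clean one reflects only what was legitimately available at fitting time.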
10) SYSTEMS & POLICY LAB
Definition
A pathway that converts student work into implementable interventions—policies, pilots, or service designs—through stakeholder mapping, regulatory navigation, and impact modeling.
Key idea
Impact = idea × institutional fit. Moving from prototype to real-world change requires understanding incentives, rules, costs, risks, and accountability—then designing for adoption, not just invention.
What it consists of
Stakeholder & incentive maps: who wins/loses, decision rights, gatekeepers, informal power, potential coalitions.
Regulatory scan & feasibility: statutes, standards, data rules, procurement paths; identify minimal viable compliance.
Impact & cost models: outcomes, KPIs, cost-benefit, distributional effects; second-order consequence mapping.
Pilot design & evaluation: target site, scope, timeline, safeguards, measurement plan; consent/ethics reviews.
MoUs & handoffs: agreements with municipalities/NGOs/startups; escalation paths; governance for scale or sunset.
Core outputs
Policy brief or implementation memo (clear ask, rationale, evidence).
Pilot protocol (who/what/when/how measured).
Signed MoU / approval to proceed (or a documented block with next steps).
How we measure success
Pilots launched: % of proposals that reach a live pilot within a term.
Decision velocity: days from proposal → approval/decline with reasons.
Adoption & persistence: pilots that continue past initial window; scale decisions.
Stakeholder satisfaction: ratings from host orgs; repeat invitations.
Measured outcomes: movement on agreed KPIs; credible learning if goals aren’t met.
11) DESIGN FOR IMPACT: STORYTELLING & DEMO DAYS
Definition
A communication studio that turns complex work into compelling, evidence-backed narratives with live demos, so ideas land with users, partners, and decision-makers.
Key idea
If it isn’t understood, it can’t be adopted. Clear narrative + honest evidence + reliable demo = momentum. Communication is a problem-solving tool, not a cosmetic afterthought.
What it consists of
Narrative architecture: problem → stakes → insight → solution → evidence → next step; audience-specific versions (technical, policy, public).
Evidence visuals: minimal, truthful charts; uncertainty bands; before/after comparisons; failure modes disclosed.
Live demo craft: rehearsal under constraints (bad Wi-Fi, time-boxed); fallbacks; demo scripts that highlight the reasoning, not just the UI.
Q&A sparring: red-team questions, objection handling, “steel-man” counter-positions.
Media kit: one-pager, deck, 90-sec video, FAQ, artifact links (repo, brief, pilot plan).
Core outputs
Pitch deck + one-pager tailored to a real audience.
Demo video (or live demo script) proven reliable in rehearsal.
Public artifact post (site/blog/repo README) that others can act on.
How we measure success
Comprehension & recall: audience quiz/survey on problem, solution, and evidence.
Conversion: # of concrete follow-ups (pilots, intros, funding, internships).
Demo reliability: % of demos executed without critical failure; quality maintained under time limits.
Clarity ratings: partner/mentor scores on honesty, precision, and usefulness.
Amplification: views/shares from relevant stakeholders; inbound interest over 30/60 days.
12) INTERNSHIP & RESIDENCY EXCHANGE
Definition
Short, scoped placements with startups, labs, media, and public agencies where students deliver real artifacts under dual mentorship (host + Imagine Lab).
Key idea
Situated practice with real stakes compresses learning. When deliverables matter to someone else, problem-solving muscles (framing, modeling, iteration, communication) grow fast.
What it consists of
Matching & scoping: capability × host need; a tight scope doc (problem, constraints, deliverables, acceptance criteria).
Dual mentorship: day-to-day host supervisor + Lab coach for process and reflection.
Milestone cadence: kickoff, mid-point review, final handover, and retro.
Professional guardrails: IP, data/privacy, safety, equity; stipend or credits; conflict resolution path.
Showcase: end-of-residency artifact demo and written reflection.
Core outputs
Accepted deliverable (code, analysis, design, content, brief) meeting spec.
Supervisor evaluation (rubric across the 12 areas where applicable).
Residency reflection (what changed in how I solve problems; next goals).
How we measure success
Host NPS & re-engagement: would they host again? repeat placements?
Deliverable acceptance: % artifacts meeting spec on time; rework needed.
Career lift: offers extended, continued collaboration, references earned.
Student growth: pre/post rubric movement on targeted capabilities.
Network effects: # of intros, pilots, or grants triggered by residency work.
13) PERFORMANCE & MINDSET COACHING
Definition
A coaching track that builds elite habits for focus, resilience, deliberate practice, and reflective judgment—so students execute hard things without burning out.
Key idea
Skill compounds only when powered by consistent execution. Teach teens to run themselves like high-performance learners: clear goals, deep-work routines, bias awareness, recovery, and honest post-mortems.
What it consists of
Operating system setup: weekly cadence boards (goals → tasks → time blocks), “one big lever” focus, end-of-week retros (wins, frictions, fixes).
Deep-work & attention: distraction audits, 50–90-minute focus blocks, phone hygiene, cue–routine–reward redesign.
Cognitive tools: bias spotters (confirmation, sunk-cost, availability), pre-mortems, checklists for complex tasks.
Resilience & recovery: stress toolkits (breathwork, micro-breaks), sleep basics, sustainable effort cycles; reframing failure as information.
Feedback muscles: how to ask for, parse, and act on tough feedback; “steel-man” other views; conflict as information.
Core outputs
Personal Operating Manual (POM)—the student’s written playbook for getting hard work done well.
Weekly cadence board & retrospective logs.
Bias & failure playcards (personal “watch-outs” + counter-moves).
How we measure success
Execution reliability: % weeks where priority commitments are met.
Iteration velocity: average days from v1→v2→v3 on live projects.
Feedback utilization: measurable changes linked to specific critiques.
Wellbeing balance: self-reported energy/sleep; sustained output without burnout over 8–12 weeks.
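Two of these measures, execution reliability and iteration velocity, reduce to simple arithmetic over a student's cadence board and ship dates. A minimal sketch, assuming an illustrative log format (not a prescribed one):

```python
from datetime import date

# Hypothetical weekly commitment log and version-ship dates (illustrative data).
weeks = [
    {"priority_commitments": 3, "met": 3},
    {"priority_commitments": 2, "met": 1},
    {"priority_commitments": 4, "met": 4},
]

ship_dates = [date(2024, 10, 1), date(2024, 10, 8), date(2024, 10, 20)]  # v1, v2, v3

def execution_reliability(log):
    """Fraction of weeks in which every priority commitment was met."""
    return sum(w["met"] >= w["priority_commitments"] for w in log) / len(log)

def iteration_velocity(dates):
    """Average days between consecutive shipped versions (v1 -> v2 -> v3 ...)."""
    gaps = [(later - earlier).days for earlier, later in zip(dates, dates[1:])]
    return sum(gaps) / len(gaps)

print(execution_reliability(weeks))    # ≈ 0.67
print(iteration_velocity(ship_dates))  # 9.5
```

Keeping the metrics this plain lets students compute them in their own weekly retros rather than waiting for a coach.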
14) FOUNDER & GRANTS FAST-TRACK
Definition
A fast lane for students ready to found a venture or mission-driven initiative now—covering incorporation, IP, customer discovery, lightweight governance, and non-dilutive funding.
Key idea
Some teens are already builders. Give them just-in-time scaffolds to move from idea → users → legal structure → early money—while keeping ethics, safety, and learning at the core.
What it consists of
Idea → user proof: problem interviews, smoke tests/landing pages, simple traction metrics; define "who hurts enough to care?"
Entity basics: choose path (association/NGO, s.r.o., project under fiscal sponsor), IP & contributor agreements, minimal bookkeeping.
Go-to-market & delivery: scope v0.1 offering, tiny SLA, first five users/customers/beneficiaries.
Funding track: microgrants, Czech/EU youth & innovation grants, sponsorship, early revenue; grant calendar, boilerplates, budgets.
Governance & ethics: lightweight board/mentor circle, risk register, data/privacy practices, safeguarding policies.
Core outputs
Lean venture dossier: problem brief, user proof, basic model, 90-day plan.
Legal footing: incorporation or fiscal-sponsorship agreement; contributor/IP templates.
Funding artifacts: two grant submissions or sponsor pitches; one live pilot with real users.
How we measure success
Launch rate & time: % ventures legally set up within 6–8 weeks.
Early traction: first users/customers; retention or NPS after 30–60 days.
Funding conversion: grants/sponsorships won; € raised; matching in-kind support.
Governance health: on-time reporting, risk logs maintained, safeguarding compliance.
15) PORTFOLIO, BADGES & NATIONAL TALENT PASSPORT
Definition
A verified, public-facing record of shipped work and demonstrated capabilities mapped to the 12 areas—portable across schools, partners, and future employers.
Key idea
Grades don’t show what you can build. A living portfolio—auto-collecting artifacts, mentor verifications, and peer badges—creates signal-rich proof of problem-solving power.
What it consists of
Auto-ingest & curation: pull repos, briefs, demos, data notebooks; students add reflections (goal → method → evidence → limits).
Verification layer: mentor/partner endorsements tied to specific artifacts and rubrics (framing, modeling, evidence, iteration, collaboration…).
Badge taxonomy: micro-credentials aligned to the 12 areas + tool fluency (e.g., “Systems Modeling—Advanced”, “Responsible AI—Evaluator”).
Talent Passport: concise, shareable profile (top artifacts, verified skills, testimonials, contact/availability).
Privacy & consent: student controls visibility; minors’ safety defaults; export anytime.
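The pieces above imply a small data model: artifacts with reflections, endorsements tied to those artifacts, and student-controlled visibility. One possible sketch (all class and field names are assumptions for illustration, not a specified schema):

```python
from dataclasses import dataclass, field

@dataclass
class Artifact:
    title: str
    url: str
    reflection: str                                   # goal -> method -> evidence -> limits
    endorsed_by: list = field(default_factory=list)   # mentor/partner verifications

@dataclass
class TalentPassport:
    student: str
    visibility: str = "private"                       # safe default for minors; student-controlled
    badges: list = field(default_factory=list)        # e.g. "Responsible AI—Evaluator"
    artifacts: list = field(default_factory=list)

    def verified_artifacts(self):
        """Only artifacts carrying at least one endorsement are surfaced publicly."""
        return [a for a in self.artifacts if a.endorsed_by]

passport = TalentPassport(student="J. Novak", badges=["Responsible AI—Evaluator"])
passport.artifacts.append(Artifact("Flood model", "https://example.org/repo",
                                   "goal -> method -> evidence -> limits",
                                   endorsed_by=["mentor@partner.org"]))
print(len(passport.verified_artifacts()))  # 1
```

Note the design choice embedded here: visibility defaults to private and verification hangs off individual artifacts, so the public profile can never show more than the student has chosen and mentors have confirmed.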
Core outputs
Public portfolio page with 5–8 high-quality artifacts and reflections.
Verified badges & endorsements mapped to the 12 areas.
Talent Passport suitable for partners, residencies, and scholarships.
How we measure success
Market signal: views → invites; conversion to interviews, residencies, or paid briefs.
Artifact quality trend: mentor rubric scores rising across terms.
Utilization: % active profiles updated per month; outbound shares.
Equity: access/use by region/school type; support given to close gaps.
16) PARTNER NETWORK & OPPORTUNITY MARKETPLACE
Definition
A two-sided platform where companies, labs, NGOs, media, and municipalities post real problem briefs; student squads bid, deliver, and learn—under ethics and quality guardrails.
Key idea
Real problems create real learning. Curate a pipeline of authentic briefs with clear specs, constraints, and review cycles; pay small bounties/microgrants; ensure reproducible, ethical delivery.
What it consists of
Brief templates & SLAs: problem, context, success criteria, data access, constraints, deliverables, timeline, review points.
Matching & squad formation: capability tags from Talent Passports; staff + algorithmic matching; onboarding to partner norms.
Delivery spine: kickoff, midpoint review, evidence check, final handover; decision logs and risk registers maintained.
Quality & ethics: reproducibility checks, privacy/safety, accessibility; red-team reviews for high-stakes work.
Payments & recognition: bounties/microgrants, certificates, testimonials; opt-in public showcases.
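The "staff + algorithmic matching" step can start very simply: score each squad against a brief by overlap between the brief's required tags and the squad's capability tags, then let staff review the top candidates. A minimal sketch using Jaccard similarity (tags and squad names are illustrative assumptions):

```python
# Score squads against a brief by capability-tag overlap (Jaccard similarity).
def jaccard(a, b):
    """Similarity of two tag collections: |intersection| / |union|, in [0, 1]."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def rank_squads(brief_tags, squads):
    """Return squads sorted by descending tag overlap with the brief."""
    return sorted(squads, key=lambda s: jaccard(brief_tags, s["tags"]), reverse=True)

brief = ["data-analysis", "privacy", "policy-brief"]
squads = [
    {"name": "Alpha", "tags": ["data-analysis", "visualization"]},
    {"name": "Beta",  "tags": ["privacy", "policy-brief", "data-analysis"]},
]
print(rank_squads(brief, squads)[0]["name"])  # Beta
```

In practice this ranking would only shortlist candidates; staff still weigh availability, growth goals, and partner norms before confirming a match.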
Core outputs
Completed briefs with partner acceptance (code, analysis, memo, prototype).
Evidence briefs (what worked, limits, next steps).
Testimonials/MoUs for continued collaboration.
How we measure success
Fill & completion rates: % briefs matched within 2 weeks; % delivered on time to spec.
Cycle time & quality: median days from kickoff → acceptance; reproducibility and partner quality scores.
Retention & growth: repeat partners; increased brief complexity/value over time.
Student benefit: paid bounties, follow-on opportunities, skill deltas tied to briefs.
Safety & compliance: zero critical incidents; all work passes ethics/privacy checks.