Browser-first by default: the core session works entirely in the browser. A local VM and advanced tooling are strictly optional.

Facilitator Run of Show

Minute-by-minute plan, demo prompts, and fallback instructions.

Minute-by-minute run of show

Time  | Segment               | Focus
0-10  | Arrival and setup     | calm arrival, devices/browser ready, chargers out, optional Ethernet mention, expectations and reassurance
10-18 | Intro to AI and ChatGPT | what AI is, what ChatGPT is, what it does well, what it gets wrong, safety and critical thinking
18-28 | Demo 1                | weak prompt vs improved prompt in ChatGPT
28-40 | Hands-on 1            | learners improve a guided starter prompt with low-choice support
40-45 | Short break / regroup | brief break and room reset
45-53 | Demo 2                | weak image prompt vs improved image prompt
53-63 | Hands-on 2            | learners create one image using guided scaffolding
63-70 | Demo 3                | short facilitator-only Codex demo
70-80 | Hands-on 3            | learners choose Guided, Build, Remix, or Stretch
80-87 | Reflection            | what changed, what worked, what surprised you, what would you do differently next time
87-90 | Closing               | safe use reminder, takeaways, resources, what happens next

Exact demo prompts

ChatGPT demo 1: study helper

Weak prompt: Help me study science.

Why weak: It does not name the topic, age, format, or what would make the answer useful.

Improved prompt: I am 15 and reviewing photosynthesis. Explain it in five short bullet points, give one everyday analogy, and end with two quiz questions.

Teaching point: Prompt quality often comes from clear purpose, audience, format, and limits.

Optional learner follow-up: Change the subject to geography or biology and keep the same structure.

ChatGPT demo 2: weekend planner

Weak prompt: Plan my weekend.

Why weak: It is too open-ended and gives no budget, location, style, or age context.

Improved prompt: Plan a low-cost Saturday for a teen in Vancouver with one outdoor idea, one indoor backup, and one snack stop. Keep the whole plan under $25 and list it as a simple timeline.

Teaching point: Specific constraints often make a response more practical.

Optional learner follow-up: Remix it for a rainy evening or a family outing.

ChatGPT demo 3: tone and audience rewrite

Weak prompt: Rewrite this email.

Why weak: It does not say who the email is for, how it should sound, or how long it should be.

Improved prompt: Rewrite this email so it sounds polite and clear for a teacher. Keep it under 120 words, make the ask obvious, and remove dramatic wording.

Teaching point: Audience and tone can matter as much as topic.

Optional learner follow-up: Change the audience to a team leader or event organizer.

Image demo 1: camp poster

Weak prompt: Make a camp poster.

Why weak: The tool has to guess the mood, style, colours, and layout.

Improved prompt: Create a friendly poster-style image for a scout campfire night at dusk with warm lantern light, navy and gold colours, pine trees in the background, and space at the top for a title.

Teaching point: Image prompts improve when the request includes scene, style, mood, and layout clues.

Optional learner follow-up: Remix it for a daytime family picnic poster.

Image demo 2: round sticker

Weak prompt: Make a cool sticker.

Why weak: "Cool" means different things to different people and gives no design direction.

Improved prompt: Design a round sticker with a mountain, a compass, and a small spark icon using bold flat colours, thick outlines, and a clean badge layout.

Teaching point: Style words and layout words help image tools narrow the result.

Optional learner follow-up: Remix it for a camp mug graphic or social tile.

Facilitator-only Codex demo

Weak prompt: Build me an app.

Why weak: It does not say what kind of app, what size, or what success looks like.

Improved prompt: Make one small HTML page for a scout checklist with a heading, three checklist items, one Complete button, and a short explanation of the changes. Keep it in one file.

Teaching point: The same prompt lesson applies to coding assistants: clearer requests reduce confusion.

Optional learner follow-up: Ask how the same prompt could be adapted for a bug fix.
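For facilitator reference only, here is a minimal sketch of the kind of single-file page the improved Codex prompt describes. The wording, checklist items, and button behaviour are illustrative assumptions, not what Codex will actually produce:

```html
<!-- Illustrative sketch only: one possible single-file result for the improved prompt. -->
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="utf-8">
  <title>Scout Checklist</title>
</head>
<body>
  <!-- A heading, as the prompt requests. -->
  <h1>Scout Checklist</h1>

  <!-- Three checklist items. -->
  <ul>
    <li><label><input type="checkbox"> Pack water bottle</label></li>
    <li><label><input type="checkbox"> Bring flashlight</label></li>
    <li><label><input type="checkbox"> Check the weather</label></li>
  </ul>

  <!-- One Complete button; the click handler is a hypothetical simple behaviour. -->
  <button onclick="document.getElementById('status').textContent = 'All set!'">Complete</button>
  <p id="status"></p>

  <!-- Everything stays in one file: no external scripts or stylesheets,
       matching the prompt's "keep it in one file" constraint. -->
</body>
</html>
```

Showing a sample like this helps the room see that the improved prompt's constraints (one file, three items, one button) map directly onto the structure of the output.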

Reflection and closing script

  • What changed after you improved the prompt?
  • What worked better than expected?
  • What still needs human checking?
  • Where can you use this safely after tonight?

Closing remarks: AI can help you think faster, but it should not replace judgment. Ask clearly, check what matters, and use the browser-first path as your default.

Fallback and recovery

  • If accounts fail, pair learners or switch to presenter-led examples.
  • If the image tool fails, compare prompts verbally and use the prompt cards only.
  • If internet access fails, keep the room together and use printed cards plus discussion.
  • If the pace slips, cut optional share-outs before cutting the core prompt-comparison pattern.