---
name: hatch-pet
description: Create, repair, validate, preview, and package Codex-compatible animated pet spritesheets from character art, screenshots, generated images, or visual references. Use when a user wants to hatch a Codex pet, create a custom animated pet, or build a built-in pet asset with an 8x9 atlas, transparent unused cells, row-by-row animation prompts, QA contact sheets, preview videos, and pet.json packaging. This skill composes the installed $imagegen system skill for visual generation and uses bundled scripts for deterministic spritesheet assembly.
triggers:
  - "hatch a pet"
  - "hatch pet"
  - "codex pet"
  - "spritesheet pet"
  - "animated pet"
  - "孵化宠物"
  - "电子宠物"
od:
  mode: image
  surface: image
  scenario: personal
  featured: 11
preview:
  type: image
  entry: final/spritesheet.png
design_system:
  requires: false
outputs:
  primary: final/spritesheet.png
  secondary:
    - final/spritesheet.webp
    - pet.json
    - qa/contact-sheet.png
example_prompt: "Hatch me a tiny pixel-art shiba pet — friendly, sitting upright, with a small pomegranate prop. Use the hatch-pet skill end-to-end."
upstream: "https://github.com/openai/skills/tree/main/skills/.curated/hatch-pet"
---

# Hatch Pet

> **Open Design integration.** This is the unmodified Codex `hatch-pet` skill,
> vendored under `skills/hatch-pet/` so any Open Design agent can run it. After
> the skill finishes packaging, the resulting `spritesheet.webp` (under
> `${CODEX_HOME:-$HOME/.codex}/pets//`) can be imported into the
> floating pet companion via **Settings → Pets → Import Codex sprite**. The
> import flow auto-detects the 8×9 / `192×208` atlas and lets the user pick
> which animation row to play (idle, running-right, waving, …).

## Overview

Create a Codex-compatible animated pet from a concept, one or more reference images, or both. This skill owns pet-specific prompt planning, animation rows, frame extraction, atlas geometry, QA, previews, and packaging. It delegates visual generation to `$imagegen`.
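The atlas geometry the skill owns can be sketched in a few lines. This is an illustrative sketch only, assuming the 8×9 grid means 8 frame columns by 9 animation rows of 192×208 px cells as described above; the constant and function names are hypothetical, not part of the bundled scripts:

```python
# Illustrative atlas geometry, assuming an 8-column x 9-row grid of
# 192x208 px cells (grid and cell sizes taken from the skill text).
COLS, ROWS = 8, 9          # 8 frames per animation row, 9 animation rows
CELL_W, CELL_H = 192, 208  # per-frame cell size in pixels

def atlas_size():
    """Full spritesheet dimensions in pixels."""
    return COLS * CELL_W, ROWS * CELL_H

def cell_rect(row, col):
    """Pixel rectangle (left, top, right, bottom) for one frame cell."""
    if not (0 <= row < ROWS and 0 <= col < COLS):
        raise ValueError("cell outside the 8x9 grid")
    left, top = col * CELL_W, row * CELL_H
    return left, top, left + CELL_W, top + CELL_H

print(atlas_size())     # (1536, 1872)
print(cell_rect(1, 2))  # (384, 208, 576, 416)
```

Under these assumptions the full sheet is 1536×1872 px, and every frame slot can be addressed by (row, column) during extraction and validation.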
User-facing inputs are optional. If the user omits a pet name, infer one from the concept or reference filenames; if that is not possible, choose a short, appropriate name. If the user omits a description, infer one from the concept or references. If the user omits reference images, generate the base pet from text first, then use that base as the canonical reference for every animation row.

## Generation Delegation

Use `$imagegen` for all normal visual generation. Before generating base art, row strips, or repair rows, load and follow the installed image generation skill:

```text
${CODEX_HOME:-$HOME/.codex}/skills/.system/imagegen/SKILL.md
```

Do not call the Image API directly for the normal path. Let `$imagegen` choose its own built-in-first path and its own CLI fallback rules. If `$imagegen` says a fallback requires confirmation, ask the user before continuing.

When invoking `$imagegen` from this skill, pass the generated pet prompt as the authoritative visual spec. Do not wrap it in the generic `$imagegen` shared prompt schema, and do not add extra polish, hero-art, photo, product, or illustration-style augmentation. Pet prompts should stay terse, sprite-specific, and digital-pet oriented; only add role labels for input images and any essential user constraint.

Use this skill's scripts for deterministic work only: preparing prompts and manifests, ingesting selected `$imagegen` outputs, extracting frames, validating rows, composing the final atlas, creating QA media, and packaging.

Hard boundary: do not create, draw, tile, warp, mirror, or synthesize pet visuals with local Python/Pillow scripts, SVG, canvas, HTML/CSS, or other code-native art as a substitute for `$imagegen`. For a normal pet run, expect up to 10 visual generation jobs: 1 base pet plus 9 row-strip jobs. The only exception is `running-left`, which may be derived by mirroring `running-right`, but only after `running-right` has been generated, visually inspected, and explicitly approved as safe to mirror.
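Where mirroring has been approved, the derivation is a pure horizontal flip of each already-generated frame, with the frame order left unchanged. A minimal sketch, treating a row strip as a list of frames where each frame is a 2D array of pixels (this nested-list representation is purely illustrative; real runs operate on image files via the bundled scripts):

```python
# Illustrative sketch of deriving running-left from an approved
# running-right strip: flip each frame horizontally, keep frame order.
def mirror_strip(frames):
    """Flip every scanline of every frame; frame order is preserved."""
    return [[list(reversed(scanline)) for scanline in frame]
            for frame in frames]

# Two tiny 1x3 "frames" stand in for real 192x208 cells.
right = [[["a", "b", "c"]],
         [["d", "e", "f"]]]
print(mirror_strip(right))  # [[['c', 'b', 'a']], [['f', 'e', 'd']]]
```

Flipping per frame (rather than flipping the whole strip image) matters: a whole-strip flip would also reverse the playback order of the animation.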
If mirroring is not appropriate, generate `running-left` as a normal grounded `$imagegen` row. If those calls are too expensive, blocked, or unavailable, stop and explain the blocker instead of fabricating row strips locally.

Do not mark visual jobs complete by editing `imagegen-jobs.json`, copying files into `decoded/`, or writing helper scripts that populate row outputs. Use `record_imagegen_result.py` for selected built-in `$imagegen` outputs, or `generate_pet_images.py` only for the documented secondary fallback. The deterministic scripts may only process already-generated visual outputs.

Only the base job may be prompt-only. Every row-strip job generated through `$imagegen` must use the input images listed in `imagegen-jobs.json`, including the canonical base reference created after the base job is recorded. Treat any row generation without attached grounding images as invalid.

## Codex Digital Pet Style

Default pet art should match the Codex app's built-in digital pets: small pixel-art-adjacent mascots with compact chibi proportions, chunky readable silhouettes, thick dark 1-2 px outlines, visible stepped/pixel edges, limited palettes, flat cel shading, simple expressive faces, and tiny limbs. Even if the reference art is more detailed, complex, or realistic, the generated pet should be simplified into this style.

Do NOT generate polished illustration, painterly rendering, anime key art, 3D rendering, glossy app-icon treatment, realistic fur or material texture, soft gradients, high-detail antialiasing, or complex tiny accessories. References that are more detailed than this should be simplified into the house style before row generation.

## Transparency And Effects

Pet rows are processed into transparent 192x208 cells, so every generated pixel must either belong to the pet sprite or be cleanly removable chroma-key background. Prefer pose, expression, and silhouette changes over decorative effects.
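The chroma-key cleanup idea above can be sketched as a per-pixel rule: pixels near the key color become fully transparent, everything else stays opaque. The key color and tolerance below are assumptions for illustration only; the real pipeline's values live in the bundled scripts, not in this document:

```python
# Hedged sketch of chroma-key removal. KEY and TOLERANCE are illustrative
# assumptions, not values from the bundled scripts.
KEY = (0, 255, 0)   # assumed green-screen key color
TOLERANCE = 48      # assumed per-channel distance threshold

def to_rgba(pixel):
    """Map an (r, g, b) pixel to (r, g, b, a), keying out the background."""
    distance = max(abs(c - k) for c, k in zip(pixel, KEY))
    alpha = 0 if distance <= TOLERANCE else 255
    return (*pixel, alpha)

print(to_rgba((10, 250, 12)))  # (10, 250, 12, 0)   -> background removed
print(to_rgba((40, 30, 20)))   # (40, 30, 20, 255)  -> opaque pet pixel
```

This hard-edged, all-or-nothing alpha is why the rules below ban soft glows, blur, and chroma-key-adjacent colors: anti-aliased or translucent pixels near the key color cannot be cleanly classified.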
Allowed effects must satisfy all of these conditions:

- The effect is state-relevant and helps explain the animation.
- The effect is physically attached to, touching, or overlapping the pet silhouette, not floating nearby.
- The effect is inside the same frame slot as the pet and does not create a separate sprite component.
- The effect is opaque, hard-edged, pixel-style, and uses non-chroma-key colors.
- The effect is small enough to remain readable at 192x208 without clutter.

Examples of allowed effects: a tear touching the face, a small smoke puff touching the box or head, or tiny stars overlapping the pet during a failed/dizzy reaction.

Avoid these by default because they usually break transparent-background cleanup or component extraction:

- wave marks, motion arcs, speed lines, action streaks, afterimages, blur, or smears
- detached stars, loose sparkles, floating punctuation, floating icons, falling tear drops, separated smoke clouds, or loose dust
- cast shadows, contact shadows, drop shadows, oval floor shadows, floor patches, landing marks, impact bursts, glow, halo, aura, or soft transparent effects
- text, labels, frame numbers, visible grids, guide marks, speech bubbles, thought bubbles, UI panels, code snippets, checkerboard transparency, white backgrounds, black backgrounds, or scenery
- chroma-key-adjacent colors in the pet, prop, effects, highlights, or shadows
- stray pixels, disconnected outline bits, speckle/noise, cropped body parts, overlapping poses, or any pose that crosses into a neighboring frame slot

State-specific guidance:

- `waving`: show the wave through paw pose only. Do not draw wave marks, motion arcs, lines, sparkles, or symbols around the paw.
- `jumping`: show vertical motion through body position only. Do not draw shadows, dust, landing marks, impact bursts, bounce pads, or floor cues.
- `failed`: tears, attached smoke puffs, or attached stars are allowed if they obey the allowed-effects rules; do not use red X marks, floating symbols, detached smoke, detached stars, or separate tear droplets.
- `review`: show focus through lean, blink, eyes, head tilt, or paw position. Do not add magnifying glasses, papers, code, UI, punctuation, or symbols unless that prop already exists in the base pet identity.
- `running-right`, `running-left`, and `running`: show locomotion through body, limb, and prop movement only. Do not draw speed lines, dust clouds, floor shadows, or motion trails.

## Pet Naming

Ask the user for a pet name when they have not provided one and only if the conversation naturally allows it. If asking would slow down a direct execution request, choose a short, appropriate name from the pet concept, reference image, or personality, then use that name consistently as the display name and as the source for the package folder slug.

Good built-in style examples:

- Codex - The original Codex companion.
- Dewey - A tidy duck for calm workspace days.
- Fireball - Hot path energy for fast iteration.
- Rocky - A steady rock when the diff gets large.
- Seedy - Small green shoots for new ideas.
- Stacky - A balanced stack for deep work.
- BSOD - A tiny blue-screen gremlin.
- Null Signal - Quiet signal from the void.

## Visible Progress Plan

For every pet run, keep a visible checklist so the user can see where the work is up to. Create the checklist before starting, keep one step active at a time, and update it as each step finishes.

Before creating the checklist, establish the pet name when possible. Use the user-provided name when available; otherwise infer a short, appropriate name from the concept or references. If the name is too long, not settled, or not appropriate for a friendly checklist, use `your pet` instead.

Use this checklist for a normal pet run, replacing `` with the pet's name or `your pet`:

1. Getting `` ready.
2. Imagining ``'s main look.
3. Picturing ``'s poses.
4. Hatching ``.

What each step means:

- `Getting ready.` Choose or confirm the pet name, description, source images, and working folder.
- `Imagining 's main look.` Generate the pet's main reference image. This is required for new pets, even when the user does not provide an image, because it becomes the visual source of truth.
- `Picturing 's poses.` Create the pose rows, starting with `idle` and `running-right` to confirm the pet still looks consistent. Only mirror `running-left` if `running-right` clearly works when flipped.
- `Hatching .` Turn the approved poses into the final pet files, review the contact sheet, previews, and validation results, fix any broken parts, save `pet.json` and `spritesheet.webp` into the pet folder, then tell the user where the pet and QA files were saved.

Only mark a step complete when the real file, image, or decision exists. If this is just a repair run, start from the first relevant step instead of restarting the whole checklist.

## Default Workflow

1. Prepare a pet run folder and imagegen job manifest:

   ```bash
   SKILL_DIR="${CODEX_HOME:-$HOME/.codex}/skills/hatch-pet"
   python "$SKILL_DIR/scripts/prepare_pet_run.py" \
     --pet-name "" \
     --description "" \
     --reference /absolute/path/to/reference.png \
     --output-dir /absolute/path/to/run \
     --pet-notes "" \
     --style-notes "