
Visual Intelligence System · v1.0.0 · 2026-05-13

Drop images.
Get a per-platform strategy.

The substrate that connects your image catalog to your content calendar. Three asset tiers, nine platform personas, one workflow. Built on the systems already shipped at /watch, /workshops, /library, and /intelligence-system.

The problem this solves

Generation without strategy is noise.

Every creator with NB2 access can generate a beautiful image. Few can answer the next question: which platform, which post, which hook, which variant. The gap between the image and the strategy is where most production capacity is lost.

VIS closes that gap with three commitments: every platform gets one persona; every asset routes through one of three tiers; every ship walks the same gate. The system refuses anything generic — the cost of refusing is small, the cost of shipping generic is the rest of your year.

The three-layer stack

What gets made. How it gets assembled. What stops bad work from shipping.

Each layer has a single job. The boundaries are deliberate — without them, the system collapses into one tool fighting another.

Asset Layer

What gets made. Two velocity tiers: NB2 for premium hero work; Higgsfield for the cinematic and product-grade volume tier.

  • nb-image

    Premium hero + book + album covers

    Hero imagery, book covers, album covers, anything that lives at the top of a page or stays on a feed for years.

  • higgsfield-product-photoshoot

    Brand-grade product and lifestyle photography

    Workshop landing visuals, course covers, papa-hub imagery, anything that needs studio-shot polish without a studio.

  • higgsfield-generate

    Cinematic stills and 5–15s video clips

    B-roll, cinematic cutaways, ad creatives, image-to-video animations, quote cards. The high-velocity volume tier.

  • higgsfield-soul-id

    Identity-faithful Frank-as-character training

    One-time training. After training, every higgsfield-generate call with --soul-id <id> produces faithful Frank-as-character output for B-roll where on-camera is impossible.

  • higgsfield-marketplace-cards

    Marketplace-compliant product listings

    Course Lemon Squeezy listings, Etsy/Amazon-style product cards, A+ content modules.

  • music-video-batch

    Lyric video generation across the 12k catalog

    Batch lyric videos for music releases. Composes with Higgsfield video for cinematic sections.
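The tier split above can be sketched as a simple router. This is a minimal sketch, not the shipped implementation: the tool names come from the Asset Layer list, but the function, its parameters, and the routing rules are illustrative assumptions.

```python
# Hypothetical router for the asset tiers listed above. Tool names are
# from the Asset Layer; the routing logic itself is an assumption.

PREMIUM_SURFACES = {"hero", "book-cover", "album-cover"}

def route_asset(surface: str, needs_motion: bool = False,
                marketplace: bool = False) -> str:
    """Pick a generation tool for a requested surface."""
    if marketplace:
        return "higgsfield-marketplace-cards"   # compliant product listings
    if surface in PREMIUM_SURFACES:
        return "nb-image"                       # premium hero tier
    if needs_motion:
        return "higgsfield-generate"            # cinematic stills, 5-15s clips
    return "higgsfield-product-photoshoot"      # brand-grade volume tier

print(route_asset("album-cover"))               # nb-image
```

The point of the sketch: anything that "lives at the top of a page or stays on a feed for years" goes premium; everything else lands in a Higgsfield tier by default.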

Composition Layer

How assets become videos. HyperFrames for caption-heavy short-form; Remotion for stateful long-form; the existing /talking-head-ship pipeline anchors brand standards.

  • hyperframes

    HTML-native composition with deterministic seek

    Caption-heavy shorts, Reels, TikTok, lower-third overlays, Lottie/After Effects imports.

  • remotion

    React-native composition with full state

    Long-form talking-head, branded YouTube uploads, anything reusing existing React components.
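The HyperFrames-vs-Remotion decision above can be expressed as one small function. A hedged sketch: the two compositor names are from the list, while the function name and the 60-second threshold are assumptions chosen for illustration.

```python
# Illustrative chooser for the Composition Layer split: HyperFrames for
# caption-heavy short-form, Remotion for stateful long-form. The
# 60-second cutoff is an assumed proxy for "short-form".

def pick_compositor(duration_s: float,
                    reuses_react_components: bool = False) -> str:
    if reuses_react_components or duration_s > 60:
        return "remotion"       # React-native, full state, long-form
    return "hyperframes"        # HTML-native, deterministic seek
```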

Gate Layer

What stops bad work from shipping. The vis curator decides which asset for which surface; visual-creation enforces the 6-step quality pipeline; brand-voice + design.md + taste.md are the final wall.

  • vis

    Curation — which asset for which page or platform

    Before generating: ask vis what already exists. Before publishing: ask vis whether the asset matches the page conversion goal.

  • visual-creation

    Quality pipeline — 6-step organic-first generation

    Every generation that will ship publicly. Council review and human approval are non-skippable.

  • brand-voice

    Voice + AI-tone refusal list

    Every caption, alt text, and on-image copy line.
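The gate ordering above reduces to one rule: an asset ships only when every gate has explicitly passed, and a missing verdict counts as a failure. A minimal sketch; the gate names are from the list, the enforcement function is an assumption.

```python
# Hypothetical enforcement of the Gate Layer: no gate may be skipped,
# and an absent verdict blocks the ship just like a failed one.

GATES = ("vis", "visual-creation", "brand-voice")

def clear_to_ship(results: dict) -> bool:
    """True only when every gate has an explicit pass."""
    for gate in GATES:
        if not results.get(gate, False):    # missing counts as failing
            return False
    return True
```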

The nine platform personas

One persona per platform. A profile that tries to be everything is read as nothing.

These are not aspirations — they are the operational rule used by content-social-distributor, the visual-intelligence skill, and the /visual-strategy command. Edit the substrate, every downstream surface updates.

LinkedIn

tech

AI Architect — the Oracle EMEA bridge story

First-person, technically precise, enterprise-fluent. The Oracle CoE framework translated to personal use, free.

Visual treatment
Cinematic stills with negative space for thread overlay, system diagrams in JetBrains Mono, carousel-format teardowns of real architectures.
Cadence
3 posts per week — 1 long-form thread, 1 short insight, 1 carousel teardown.
Asset source
nb-image · higgsfield-generate · higgsfield-product-photoshoot
Forcing function
NLDigital 2026-05-19 / Madrid 2026-05-27 workshops drive demand and thread frequency.

YouTube (long-form)

tech

The Builder — behind-the-scenes of shipped systems

Studio voice. Walk-throughs of ACOS, IIS, Watch OS, Workshop OS. Show the code, narrate the why.

Visual treatment
Talking-head A-roll + Soul-ID Frank for B-roll cutaways where you cannot be on camera + JetBrains code overlays.
Cadence
1 video every 10–14 days. Quality over rhythm. Each ships with a paired blog post.
Asset source
remotion · higgsfield-soul-id · higgsfield-generate

YouTube Shorts

tech

The Creator — one insight, sub-60s, captioned

Tight. One idea per short. The hook is the title.

Visual treatment
HyperFrames composition with caption-style block + Higgsfield 5–15s cinematic cutaway + brand lower-third.
Cadence
5 per week. Batch on Sunday from /watch/shorts pipeline.
Asset source
hyperframes · higgsfield-generate

TikTok

tech

The Creator — same soul as Shorts, different syntax

Faster, looser, lower-fi. Caption-on, sound-on, vertical native.

Visual treatment
HyperFrames composition tuned for TikTok caption ceiling + native font weight + Higgsfield Seedance for motion-heavy clips.
Cadence
5 per week, repurposed from Shorts cuts but with TikTok-native captions and sound choices.
Asset source
hyperframes · higgsfield-generate

Instagram

soul

The Aesthete — the 12k music catalog and generative art

Visual-first. Caption is liner notes, not pitch. Soul spectrum dominant — this is the music side of the brand.

Visual treatment
NB2 hero covers + Higgsfield carousel variants (4–6 frames per post) + occasional Reels from HyperFrames.
Cadence
4 posts + 2 stories per week. Album drops trigger Reel pipeline.
Asset source
nb-image · higgsfield-product-photoshoot · music-video-batch

X

tech

The Thinker — tight insights, threaded teardowns

Condensed. Naval-density without the mysticism. Each tweet earns the next.

Visual treatment
Cinematic 4K stills as quote cards (Higgsfield Cinema Studio) + occasional code-screenshot embeds + diagrams from /studio/visual.
Cadence
Daily. 1 thread per week.
Asset source
higgsfield-generate · nb-image

Threads

tech

The Conversation-starter — lower-stakes, higher-frequency

Conversational. Questions over statements. Reply-bait done honestly.

Visual treatment
Mostly text. At most one quick Higgsfield generate per three posts.
Cadence
5 per week. Repurposed X drafts that need more breathing room.
Asset source
higgsfield-generate

Bluesky

tech

The Live-thinker — mic-to-publish, the CIS MV1 surface

Voice-first. Captured via mic, lightly edited, published same-day.

Visual treatment
Quick-generate thumbnail per post. No deep production — speed is the point.
Cadence
Daily, per the CIS MV1 mic-to-publish workflow (Friday 2026-05-09 forcing function).
Asset source
higgsfield-generate · nb-image
Forcing function
CIS MV1 first publish: 2026-05-09. Mic → transcript → Bluesky.

Spotify / Apple Music

soul

The Producer — the 12,000+ song catalog

No voice. The cover and the music carry it. Track titles are the only copy that matters.

Visual treatment
NB2 album/single covers (2K minimum, mimeType-derived ext per the 2026-04-25 regression rule) + Higgsfield cinematic visualizers for lead tracks.
Cadence
Per release. 1–4 covers per week depending on catalog batch.
Asset source
nb-image · higgsfield-generate · music-video-batch
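The persona table above is described as an editable substrate that every downstream surface reads. A sketch of what that substrate could look like as data, with two of the nine entries filled in: the field names and structure are assumptions; the values are taken directly from the table.

```python
# Illustrative substrate shape for the persona table. Structure is an
# assumption; values (spectrum, persona, cadence, sources) come from
# the LinkedIn and Spotify / Apple Music entries above.

PERSONAS = {
    "linkedin": {
        "spectrum": "tech",
        "persona": "AI Architect",
        "cadence": "3 posts per week",
        "asset_sources": ["nb-image", "higgsfield-generate",
                          "higgsfield-product-photoshoot"],
    },
    "spotify": {
        "spectrum": "soul",
        "persona": "The Producer",
        "cadence": "per release, 1-4 covers per week",
        "asset_sources": ["nb-image", "higgsfield-generate",
                          "music-video-batch"],
    },
}
```

Editing one entry here is the "edit the substrate, every downstream surface updates" move: content-social-distributor, the visual-intelligence skill, and /visual-strategy would all read the same record.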

The drop-images workflow

Five steps from raw image to ship-ready strategy.

Run via /visual-strategy, or dispatched automatically by the visual-intelligence-orchestrator agent on multi-image batches.

  1. Drop

    Paste images here, drop them in content/ingest/visual/, or upload a folder. The orchestrator reads each image multimodally.

  2. Analyze

    Subject, mood, lighting, composition, brand-fit, conversion potential, spectrum. Recorded as structured signals.

  3. Cross-reference

    IIS priorities, content calendar, hook-learn analytics, existing /vis registry. The image lands inside your strategy, not next to it.

  4. Recommend

    Per image, per platform: which persona, which hook pattern, which variants needed, which tool to fill the gap.

  5. Ship or fill

    If the image is ready: route through visual-creation gate to publish. If gaps exist: NB2 or Higgsfield queue with brand-tuned prompts.
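The five steps above can be sketched end-to-end. Every function body here is a stub; only the step order (drop, analyze, cross-reference, recommend, ship or fill) comes from the workflow itself, and none of the function names are the real implementation.

```python
# Hypothetical end-to-end sketch of the drop-images workflow.
# All bodies are stubs; the pipeline order is what the sketch shows.

def analyze(image: str) -> dict:
    """Step 2: record structured signals (stubbed)."""
    return {"image": image, "signals": {"mood": "cinematic"}}

def cross_reference(result: dict) -> dict:
    """Step 3: land the image inside the strategy, not next to it (stubbed)."""
    result["platforms"] = ["linkedin", "spotify"]
    return result

def recommend(result: dict) -> dict:
    """Step 4: per-image, per-platform recommendation (stubbed)."""
    result["plans"] = [{"platform": p, "ready": True}
                      for p in result["platforms"]]
    return result

def visual_strategy(images: list) -> list:
    """Steps 1-5: each dropped image either ships or queues a gap-fill."""
    return [recommend(cross_reference(analyze(img))) for img in images]
```

Step 1 (drop) is the input list; step 5 is whatever consumes the returned plans, routing ready images through the gate and queueing gap-fill briefs for the rest.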

The runtime

One command. One skill. One agent.

VIS is operated through the existing FrankX surfaces — Claude Code today, any MCP-aware client tomorrow. Nothing custom; everything composable.

Slash command

/visual-strategy

Run on a batch of images. Returns per-image, per-platform recommendations and a gap-fill plan.

Skill

visual-intelligence

Composes vis + brand-voice + social-media-strategy + nb-image + Higgsfield. Auto-triggers on visual-platform requests.

Agent

visual-intelligence-orchestrator

Multi-image batch handler. Sub-dispatches to vis, brand-voice, nb-image, and Higgsfield in parallel.

Example invocation

# In Claude Code, point at a folder of images
/visual-strategy content/ingest/visual/2026-w19/

# Or paste images directly into the conversation and ask
"Run visual-strategy on these. Bias toward LinkedIn (NLDigital workshop)
and Spotify (Friday album drop). Use Higgsfield for variants needed."

# Output: per-image analysis + per-platform plan + gap-fill brief
# Drafts land in content/staging/visual/<batch-id>/

Ready to drop images?

Open Claude Code, run /visual-strategy, and pass a batch of images. Or paste them straight into the conversation.

v1.0.0 · shipped 2026-05-13 · MIT-aligned with the Library OS pattern