
Dev field notes

The Font Problem Is Everything After the Image

Text-to-image models can sketch a typeface in seconds. Turning that sheet into a usable webfont is still a pipeline problem: grid discipline, extraction, outlines, metrics, preview, export, and the failure modes you cannot wish away.

[Illustration: minimal watercolor of a grid glyph sheet being folded into a single clean font file tile.]

Author: Peik Gabriel
Published: May 13, 2026
7 min read

Contents

  1. The sheet is not the font
  2. Why AI font tools fail in public
  3. A font is software, not a picture
  4. Where it breaks first
  5. What builders should do differently

Filed under

typography · OpenType · web fonts · AI tooling

The sheet is not the font

AI can generate a convincing alphabet sheet quickly. That is useful raw material, but it is not yet a type system. A font is a package of decisions: what each character maps to, how wide it is, where it sits on a baseline, what happens when letters touch, and whether the output survives real browsers, real rendering engines, and real text.

If you have ever tried to take a beautiful glyph image and turn it into a shipping webfont, you already know the gap. The work is not in the prompt. The work is in the pipeline that makes the output dependable: a strict grid, reliable extraction, outline generation, metrics, preview, and export that does not lie about quality.
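That pipeline is easier to hold in your head as data. Here is a rough sketch in TypeScript, where every name and field is illustrative rather than borrowed from any particular tool:

// Hypothetical stage boundaries for a sheet-to-font pipeline.
interface SheetSpec {
  cellSize: number;        // pixels per grid cell on the sheet
  baselineRatio: number;   // where the baseline sits inside a cell, 0..1 from the top
  glyphOrder: string[];    // which character lives in which cell, row-major
}

interface GlyphCell {
  char: string;
  pixels: boolean[][];     // ink vs background after thresholding
}

interface GlyphOutline {
  char: string;
  contours: [number, number][][];  // closed paths in font units
  advanceWidth: number;
}

interface FontMetrics {
  unitsPerEm: number;      // commonly 1000 or 2048
  ascender: number;
  descender: number;       // negative: depth below the baseline
}

// The exporter is then a chain: sheet -> cells -> outlines + metrics -> binary font.

Each arrow in that chain is a place where quality is won or lost.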

“The model can draw letters. The hard part is turning them into software.”

Why AI font tools fail in public

Most AI typography demos stop at the most photogenic moment: the poster of letters. That is fine as a novelty, but it is a weak promise if the user expects a usable font file. Shipping a font means answering boring questions the demo avoids: which Unicode codepoints are supported, what the ascender and descender are, what the default advance width is, and how the font behaves when it is scaled, hinted, rasterized, or embedded.
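Those questions have concrete answers inside the font binary. Here is a minimal assembly sketch with opentype.js, where every name and number is a placeholder, but each one is a decision the demo never mentions:

// Assembling a tiny font with opentype.js. All names and values are placeholders.
import * as opentype from 'opentype.js';

// .notdef is mandatory: it is what renders for any codepoint the font does not cover.
const notdef = new opentype.Glyph({
  name: '.notdef',
  unicode: 0,
  advanceWidth: 600,
  path: new opentype.Path(),
});

// One real glyph, mapped to an explicit Unicode codepoint.
const aPath = new opentype.Path();
aPath.moveTo(100, 0);
aPath.lineTo(300, 700);
aPath.lineTo(500, 0);
aPath.closePath();

const glyphA = new opentype.Glyph({
  name: 'A',
  unicode: 0x41,        // which codepoint this glyph answers for
  advanceWidth: 600,    // the default advance width question, answered per glyph
  path: aPath,
});

const font = new opentype.Font({
  familyName: 'SheetExport',  // hypothetical family name
  styleName: 'Regular',
  unitsPerEm: 1000,           // the design grid everything else is measured in
  ascender: 800,              // height above the baseline
  descender: -200,            // depth below the baseline, negative by convention
  glyphs: [notdef, glyphA],
});

const bytes = font.toArrayBuffer();  // the actual font binary, ready to save or serve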

Even a toy exporter makes the point. If you take a fixed-grid glyph sheet and convert each cell into coarse outline geometry, you can generate an OpenType font quickly. But the output instantly reveals the true work: baseline drift becomes unreadable text, inconsistent stroke weight becomes visual noise, and any lack of kerning makes the font feel broken the moment you type a real word.
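Here is roughly what that coarse conversion looks like, assuming opentype.js and a boolean pixel grid per cell. Every inked pixel becomes a tiny square contour, which is exactly the kind of geometry that survives a thumbnail and dies at text sizes:

// Coarse cell-to-glyph conversion: one square contour per inked pixel.
// Good enough to prove the pipeline exists, bad enough to expose the real work.
import * as opentype from 'opentype.js';

function cellToGlyph(
  char: string,
  pixels: boolean[][],      // pixels[row][col], row 0 at the top of the cell
  unitsPerEm = 1000,
  descender = -200,
) {
  const cellSize = pixels.length;
  const unit = unitsPerEm / cellSize;   // font units per sheet pixel
  const path = new opentype.Path();

  for (let row = 0; row < cellSize; row++) {
    for (let col = 0; col < pixels[row].length; col++) {
      if (!pixels[row][col]) continue;
      // Flip the y axis: image rows grow downward, font units grow upward.
      const x = col * unit;
      const y = (cellSize - 1 - row) * unit + descender;
      path.moveTo(x, y);
      path.lineTo(x + unit, y);
      path.lineTo(x + unit, y + unit);
      path.lineTo(x, y + unit);
      path.closePath();
    }
  }

  return new opentype.Glyph({
    name: char,
    unicode: char.codePointAt(0) ?? 0,
    advanceWidth: unitsPerEm * 0.6,   // crude fixed advance; real fonts fit this per glyph
    path,
  });
}

Replace the per-pixel squares with traced and simplified contours and you have the first honest version of the exporter; the drift described above shows up the moment you do.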

Practical thesis

The real product is everything after generation.

Treat the post-image pipeline as the product. The font is not the art. The font is the data structure and the constraints that make the art usable.

A font is software, not a picture

OpenType is not a canvas format. It is a structured container for glyph outlines, metrics, and tables that rendering engines depend on. If your pipeline cannot consistently map glyphs to codepoints and metrics, you will generate files that look fine in a thumbnail and fail everywhere else.
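One cheap way to keep that honest is to parse the exported file back and interrogate it instead of trusting the thumbnail. A sketch with opentype.js in Node, where 'exported.otf' is a placeholder path:

// Parse the exported binary and confirm the claims: metrics are sane and every
// character we advertise maps to a real glyph rather than falling back to .notdef.
import * as opentype from 'opentype.js';
import { readFileSync } from 'node:fs';

const buf = readFileSync('exported.otf');
const font = opentype.parse(
  buf.buffer.slice(buf.byteOffset, buf.byteOffset + buf.byteLength),
);

const claimed = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ';
const unmapped = [...claimed].filter((ch) => font.charToGlyphIndex(ch) === 0);

console.log('unitsPerEm:', font.unitsPerEm);
console.log('ascender / descender:', font.ascender, font.descender);
console.log('unmapped characters:', unmapped.length ? unmapped.join(' ') : 'none');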

This is also why the input constraints matter so much. A typeface is a system. The moment the sheet is sloppy, the exporter has to invent rules to fix it, and those inventions show up as drift. The fastest path to a usable result is usually not a better model. It is a stricter sheet.

The minimum pipeline that makes a font believable

  1. Constrain the input: a strict grid, consistent baseline, consistent stroke weight, one glyph per cell.
  2. Extract each cell and decide ink vs background with a threshold that can be tuned per sheet (see the sketch after this list).
  3. Generate outlines (vector paths), then set metrics (units per em, ascender/descender, advance widths).
  4. Preview in real text, not single letters; look for spacing, baseline, and weight drift immediately.
  5. Export and validate in real browsers; document limitations like missing kerning or rough outlines.
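Step 2 is the least glamorous and the most consequential, so here is a minimal version of it: deciding ink versus background for a single cell, assuming the sheet has already been rasterized to one grayscale byte per pixel.

// Ink-vs-background decision for one grid cell of the sheet.
// The threshold is deliberately a parameter: clean renders, screenshots, and
// scans all need different values, which is why it must be tunable per sheet.
function extractCell(
  gray: Uint8ClampedArray,   // grayscale sheet, one byte per pixel, row-major
  sheetWidth: number,        // sheet width in pixels
  cellX: number,             // top-left corner of this cell on the sheet
  cellY: number,
  cellSize: number,
  threshold = 128,           // brightness below this counts as ink
): boolean[][] {
  const cell: boolean[][] = [];
  for (let row = 0; row < cellSize; row++) {
    const line: boolean[] = [];
    for (let col = 0; col < cellSize; col++) {
      const value = gray[(cellY + row) * sheetWidth + (cellX + col)];
      line.push(value < threshold);
    }
    cell.push(line);
  }
  return cell;
}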

Where it breaks first

Kerning is the obvious cliff. A font that cannot adjust pair spacing will look wrong in normal text, even if individual letters are charming. Hinting, overshoots, and curve cleanup are the next cliffs: raster noise becomes vector junk, and the junk becomes jagged edges at common UI sizes.
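You can see that cliff without rendering anything: ask the exported font what it does with the pairs that always betray missing kerning. A sketch with opentype.js, where font is a parsed or freshly built Font object:

// Probe the pairs that expose missing kerning. Zero across the board means
// every pair is spaced by advance width alone, which is why real words look wrong.
import * as opentype from 'opentype.js';

function reportKerning(font: opentype.Font, pairs = ['AV', 'To', 'We', 'Yo']) {
  for (const pair of pairs) {
    const left = font.charToGlyph(pair[0]);
    const right = font.charToGlyph(pair[1]);
    console.log(pair, '->', font.getKerningValue(left, right));
  }
}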

Licensing and provenance are the quiet cliffs. If the workflow pulls from reference fonts or training data you cannot audit, you can end up with an unusable asset for commercial work. The pipeline has to make the legal story legible, not only the glyphs.

What builders should do differently

If you are building an AI typography tool, make visible what actually makes the output usable. The feature is not “generate letters.” The feature is “generate a font file that survives real text.” That means investing in constraints, preview, metrics, validation, and honestly stated limitations.

If you are evaluating one, ignore the prettiest demo output and ask for the pipeline: what input format is expected, how outlines are generated, what happens to spacing, what is supported beyond A–Z, and how the tool proves it is not shipping a broken file. The tools that win will feel less magical and more disciplined.

Research trail

5 sources

  1. Microsoft Learn: OpenType specification
  2. W3C: WOFF File Format 2.0
  3. opentype.js: OpenType font parsing and writing
  4. MDN Web Docs: @font-face
  5. FontForge: FontForge documentation