How to Refine AI-Generated Typefaces with Interface Sliders

6 Aug 2025
  1. Introduction

  2. Related Work

    2.1 Semantic Typographic Logo Design

    2.2 Generative Model for Computational Design

    2.3 Graphic Design Authoring Tool

  3. Formative Study

    3.1 General Workflow and Challenges

    3.2 Concerns in Generative Model Involvement

    3.3 Design Space of Semantic Typography Work

  4. Design Consideration

  5. TypeDance and 5.1 Ideation

    5.2 Selection

    5.3 Generation

    5.4 Evaluation

    5.5 Iteration

  6. Interface Walkthrough and 6.1 Pre-generation stage

    6.2 Generation stage

    6.3 Post-generation stage

  7. Evaluation and 7.1 Baseline Comparison

    7.2 User Study

    7.3 Results Analysis

    7.4 Limitation

  8. Discussion

    8.1 Personalized Design: Intent-aware Collaboration with AI

    8.2 Incorporating Design Knowledge into Creativity Support Tools

    8.3 Mix-User Oriented Design Workflow

  9. Conclusion and References

6 INTERFACE WALKTHROUGH

6.1 Pre-generation stage

6.2 Generation stage

6.2.2 Regenerating with appropriate strength. Recognizing that the generated result is closer to the typeface, she deletes the unwanted results and uses the slider to adjust the design-factor strength to 0.86. In the subsequent round she finds a desirable result and clicks it; the chosen design is then displayed on the central canvas.
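The strength slider described above behaves like the denoising-strength control found in image-to-image diffusion pipelines: a higher value lets the generator deviate further from the source material. As a minimal sketch (the function name, step count, and mapping are assumptions, not TypeDance's actual implementation), a [0, 1] slider value can be mapped to how many denoising steps are actually run:

```python
def strength_to_denoise_steps(strength: float, total_steps: int = 50) -> int:
    """Map a [0, 1] strength-slider value to a number of denoising steps.

    Hypothetical mapping in the style of img2img diffusion pipelines:
    higher strength -> more steps -> the output deviates further from
    the source typeface; lower strength stays closer to it.
    """
    if not 0.0 <= strength <= 1.0:
        raise ValueError("strength must be in [0, 1]")
    # Run the final `strength * total_steps` steps of the noise schedule.
    return max(1, round(strength * total_steps))
```

With the walkthrough's value of 0.86 and a 50-step schedule, this sketch would run 43 of the 50 steps, giving the model substantial freedom while still anchoring the result to the chosen typeface.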

6.3 Post-generation stage

6.3.1 Evaluating and refining the generated result. To assess its legibility, Alice navigates to the right side of the canvas and clicks the [Evaluation] button in the EVALUATION panel. The result currently sits on the imagery side of the spectrum, with a value of 0.55. Aiming to explore positions more aligned with the typeface side, she drags the slider to the left. After several trials, Alice obtains a series of results, as shown in Fig. 5.

Fig. 5. The interface of TypeDance, with a creator engaging in semantic typographic design. (a) In pre-generation, the creator brainstorms ideas and selects a typeface and imagery as design materials. (b) During generation, the creator sets generation options along with a prompt to personalize the design. (c) In post-generation, the creator evaluates and refines the design in the type-imagery spectrum.
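The type-imagery spectrum above can be sketched as a simple interpolation: a slider position of 0 corresponds to the pure typeface, 1 to pure imagery, and dragging left lowers the position toward the typeface side. The function below is a hypothetical illustration (the names and the linear-blend choice are assumptions, not the paper's method):

```python
from typing import List

def interpolate_features(typeface_vec: List[float],
                         imagery_vec: List[float],
                         position: float) -> List[float]:
    """Blend typeface and imagery feature vectors by slider position.

    position = 0.0 -> pure typeface; position = 1.0 -> pure imagery.
    Dragging the spectrum slider left lowers `position`, pulling the
    result toward the typeface side, as Alice does in the walkthrough.
    """
    if len(typeface_vec) != len(imagery_vec):
        raise ValueError("feature vectors must have equal length")
    if not 0.0 <= position <= 1.0:
        raise ValueError("position must be in [0, 1]")
    # Linear interpolation per feature dimension.
    return [(1.0 - position) * t + position * i
            for t, i in zip(typeface_vec, imagery_vec)]
```

At the walkthrough's starting value of 0.55 the blend leans slightly toward imagery; repeated refinement with lower positions yields the typeface-leaning variants shown in Fig. 5.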

Authors:

(1) SHISHI XIAO, The Hong Kong University of Science and Technology (Guangzhou), China;

(2) LIANGWEI WANG, The Hong Kong University of Science and Technology (Guangzhou), China;

(3) XIAOJUAN MA, The Hong Kong University of Science and Technology, China;

(4) WEI ZENG, The Hong Kong University of Science and Technology (Guangzhou), China.


This paper is available on arxiv under ATTRIBUTION-NONCOMMERCIAL-SHAREALIKE 4.0 INTERNATIONAL license.