Existing cross-entropy-loss-based methods heavily penalize element misalignment, while preference tuning via AAPA better captures aesthetic nuances in layouts.
Abstract
Visual layouts are essential in graphic design fields such as advertising, posters, and web interfaces. The application
of generative models for content-aware layout generation has recently gained traction. However, these models fail
to understand the contextual aesthetic requirements of layout design and do not align with human-like preferences,
primarily treating layout generation as a prediction task without considering the final rendered output. To overcome these problems,
we present Aesthetic-Aware Preference Alignment (AAPA), a novel technique to train a Multi-modal
Large Language Model (MLLM) for layout prediction that leverages an MLLM's aesthetic preferences for Direct
Preference Optimization over graphic layouts. We propose a data filtering protocol utilizing our layout-quality
heuristics for AAPA to ensure training happens on high-quality layouts.
Additionally, we introduce a novel evaluation metric that uses another MLLM to compute the win rate of the generated layout against
the ground-truth layout based on aesthetics criteria. We also demonstrate the applicability of AAPA for MLLMs of
varying scales (1B to 8B parameters) and LLM families (Qwen, Phi, InternLM). By conducting thorough
qualitative and quantitative analyses, we verify the efficacy of our
approach on two challenging benchmarks, Crello and WebUI, showcasing 17% and 16% improvements over current
State-of-The-Art methods, thereby highlighting the potential of MLLMs in aesthetic-aware layout generation.
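The win-rate metric described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: `judge` stands in for an MLLM that views a generated layout ("A") and the ground-truth layout ("B") and returns its aesthetic preference; the function name and tie handling (counting half a win) are our assumptions.

```python
from typing import Callable, List, Tuple

def win_rate(
    pairs: List[Tuple[str, str]],
    judge: Callable[[str, str], str],
) -> float:
    """Fraction of comparisons the generated layout wins.

    pairs : (generated, ground-truth) rendered layouts
    judge : hypothetical MLLM judge returning "A" (generated wins),
            "B" (ground truth wins), or "tie"
    """
    score = 0.0
    for generated, ground_truth in pairs:
        verdict = judge(generated, ground_truth)
        if verdict == "A":
            score += 1.0
        elif verdict == "tie":
            score += 0.5  # ties split the point between the two layouts
    return score / len(pairs)
```

In practice the judge would be prompted with both rendered layouts and explicit aesthetics criteria (alignment, overlap, hierarchy), with the pair order randomized to avoid position bias.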
Network architecture
The training for the aesthetic layout prediction task consists of the following steps:
1. Vision Encoder: Design elements (images and text) are processed to generate image and text embeddings.
2. AesthetiQ Model Prediction: Embeddings are passed to the AesthetiQ model, which predicts layout coordinates.
3. Training with Cross-Entropy Loss: The predicted layout is compared with the ground truth and trained using cross-entropy loss.
4. Sampling for Comparison: Multiple layout predictions are generated using AesthetiQ inference.
5. Pair Selection and Quality Filtering: We filter the data based on quality heuristics to ensure layout quality in samples.
6. Judging by ViLA: The ViLA model compares layout pairs and selects the better one based on aesthetic preferences.
7. Aesthetic Preference Optimization (AAPA): Feedback from ViLA is used to fine-tune the AesthetiQ model for aesthetic optimization.
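Step 7 applies Direct Preference Optimization using ViLA's verdicts as preference labels. A minimal sketch of the per-pair DPO objective is below; the function signature is our own, and in the actual training the log-likelihoods would come from the AesthetiQ policy and a frozen reference copy scoring the tokenized winner/loser layouts.

```python
import math

def dpo_loss(
    logp_w: float,      # policy log-likelihood of the preferred layout
    logp_l: float,      # policy log-likelihood of the rejected layout
    ref_logp_w: float,  # same quantities under the frozen reference model
    ref_logp_l: float,
    beta: float = 0.1,  # strength of the KL-style regularization toward the reference
) -> float:
    """DPO loss -log sigmoid(beta * (policy margin - reference margin))
    for one (winner, loser) layout pair judged by the MLLM."""
    margin = beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l))
    # numerically stable -log(sigmoid(margin)) = log(1 + exp(-margin))
    if margin > 0:
        return math.log1p(math.exp(-margin))
    return -margin + math.log1p(math.exp(margin))
```

The loss is log(2) when the policy and reference agree on both layouts, and shrinks toward zero as the policy assigns relatively more likelihood to the layout the MLLM judged more aesthetic.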
Results
We qualitatively compare our model, AesthetiQ, against the recent methods FlexDM, LACE,
and LayoutNUWA. Despite the challenge of arranging numerous elements, AesthetiQ consistently achieves
superior layout quality. In row (a), AesthetiQ effectively places text within salient regions, maintaining clear
hierarchy and avoiding overlaps, which enhances readability and aesthetic appeal. In row (b), it
achieves precise alignment across elements and optimally positions diverse shapes, preserving a cohesive
visual structure. Row (c) showcases AesthetiQ's advanced semantic understanding, generating a visually
balanced and aesthetically pleasing layout. Overall, AesthetiQ consistently outperforms competitors
in creating coherent, well-structured designs that align with human aesthetic preferences.
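Qualities such as overlap avoidance can also be checked mechanically, which is the spirit of the layout-quality heuristics used to filter AAPA training pairs (step 5 above). The sketch below shows one plausible such heuristic, rejecting layouts in which any two elements overlap beyond a threshold; the exact heuristics and threshold are assumptions, not the paper's specification.

```python
from typing import List, Tuple

Box = Tuple[float, float, float, float]  # (x1, y1, x2, y2)

def iou(a: Box, b: Box) -> float:
    """Intersection-over-union of two axis-aligned boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def passes_quality_filter(boxes: List[Box], max_pair_iou: float = 0.3) -> bool:
    """Keep a layout only if no two elements overlap beyond the threshold."""
    for i in range(len(boxes)):
        for j in range(i + 1, len(boxes)):
            if iou(boxes[i], boxes[j]) > max_pair_iou:
                return False
    return True
```

Analogous checks (e.g. edge-alignment distances or margin violations) can be composed with this one to score sampled layouts before they enter the preference-pair pool.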
Resources
Citation
@inproceedings{patnaik2025aesthetiq,
title={AesthetiQ: Enhancing Graphic Layout Design via Aesthetic-Aware Preference Alignment of Multi-modal Large Language Models},
author={Patnaik, Sohan and Jain, Rishabh and Krishnamurthy, Balaji and Sarkar, Mausoom},
booktitle={Proceedings of the Computer Vision and Pattern Recognition Conference},
pages={23701--23711},
year={2025}}