ByteDance Launches 1.58-bit FLUX Model for Efficient AI
ByteDance and POSTECH researchers have developed a groundbreaking 1.58-bit quantized FLUX model, achieving a 7.7x reduction in model storage while maintaining image quality comparable to the full-precision model. This innovation promises to ease the deployment of large text-to-image models on resource-constrained devices.
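The "1.58-bit" figure comes from restricting each weight to one of three values, {-1, 0, +1}, which costs log2(3) ≈ 1.58 bits per weight. The sketch below illustrates that idea with a simple absmean ternary quantizer; the function names, the per-tensor scaling, and the rounding rule are illustrative assumptions, not the exact procedure used in the FLUX paper.

```python
import numpy as np

def quantize_ternary(weights: np.ndarray):
    """Map full-precision weights to {-1, 0, +1} with one scale per tensor.

    A generic "absmean" ternary scheme, shown only to illustrate what
    1.58-bit quantization means; not the paper's actual method.
    """
    scale = np.mean(np.abs(weights)) + 1e-8        # per-tensor scale
    q = np.clip(np.round(weights / scale), -1, 1)  # ternary codes in {-1, 0, +1}
    return q.astype(np.int8), scale

def dequantize_ternary(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximate weight tensor from the ternary codes and scale."""
    return q.astype(np.float32) * scale

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.normal(size=(4, 8)).astype(np.float32)  # toy weight matrix
    q, s = quantize_ternary(w)
    w_hat = dequantize_ternary(q, s)

    # Each weight now needs log2(3) ≈ 1.58 bits instead of 16, roughly a 10x
    # theoretical saving; packing and other overheads explain why the reported
    # end-to-end storage reduction is about 7.7x rather than the full 10x.
    print("ternary codes:\n", q)
    print("mean reconstruction error:", np.abs(w - w_hat).mean())
```

In practice the ternary codes would be bit-packed (for example, five three-valued weights per byte) rather than stored as int8, which is where most of the memory saving comes from.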