A framework to enhance the safety of text-to-image generation networks

Overview of Latent Guard. First, the team compiled a dataset of safe and unsafe prompts centered around blacklisted concepts (left). They then used pre-trained textual encoders to extract features and map them into a learned latent space with their Embedding Mapping Layer (center). Only the Embedding Mapping Layer is trained; all other parameters are kept frozen. The team trained it with a contrastive loss on the extracted embeddings, pulling the embeddings of unsafe prompts and their concepts closer together while pushing them away from those of safe prompts (right). Credit: Liu et al.
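To make the training recipe described in the caption more concrete, below is a minimal, hypothetical PyTorch sketch of such a setup: a small trainable mapping layer on top of frozen text-encoder features, optimized with a contrastive objective that pulls unsafe prompt/concept pairs together and pushes safe pairs apart. The names (EmbeddingMappingLayer, contrastive_loss), the network shape, and the margin value are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch, not the authors' code: a learnable Embedding Mapping Layer
# trained with a contrastive loss on top of a frozen text encoder, as described
# in the figure caption. The frozen encoder is represented here by placeholder
# feature tensors; all names and dimensions are illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F


class EmbeddingMappingLayer(nn.Module):
    """Maps frozen text-encoder features into a learned latent space."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(in_dim, out_dim),
            nn.ReLU(),
            nn.Linear(out_dim, out_dim),
        )

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        # L2-normalize so similarities behave like cosine similarity,
        # a common choice for contrastive objectives.
        return F.normalize(self.proj(features), dim=-1)


def contrastive_loss(prompt_z, concept_z, is_unsafe, margin: float = 0.5):
    """Pull unsafe prompts toward their blacklisted concept, push safe ones away."""
    sim = (prompt_z * concept_z).sum(dim=-1)   # cosine similarity in [-1, 1]
    pos = 1.0 - sim                            # unsafe pairs: increase similarity
    neg = F.relu(sim - margin)                 # safe pairs: keep similarity below a margin
    return torch.where(is_unsafe, pos, neg).mean()


if __name__ == "__main__":
    enc_dim, latent_dim, batch = 768, 128, 4
    mapper = EmbeddingMappingLayer(enc_dim, latent_dim)   # the only trainable part

    # Stand-ins for features produced by a frozen pre-trained text encoder.
    frozen_prompt_feats = torch.randn(batch, enc_dim)
    frozen_concept_feats = torch.randn(batch, enc_dim)
    is_unsafe = torch.tensor([True, False, True, False])

    loss = contrastive_loss(
        mapper(frozen_prompt_feats),
        mapper(frozen_concept_feats),
        is_unsafe,
    )
    loss.backward()   # gradients flow only into the mapping layer
    print(float(loss))
```

In this reading, the mapping layer is cheap to train because the encoder stays frozen, and flagging a prompt at inference time reduces to checking its distance to the blacklisted-concept embeddings in the learned latent space.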

The emergence of machine learning algorithms that can generate texts and …