
The New DALL-E 2 System: When Surrealism Meets AI Algorithms

Techplur
With the help of artificial intelligence, DALL-E 2 shows how an AI system can observe and comprehend our world. Developing a system like this is crucial to creating useful and safe artificial intelligence.

Technological advances have shown great promise for artificial intelligence. AI is often astonishing in its versatility, from AlphaGo, the first program to beat a human world Go champion, to AlphaCode, an AI that can write programs on its own. Yet AI has also drawn controversy, from the privacy issues around face recognition to the creation of fake news.

The new DALL-E 2 system from OpenAI has once again raised concerns. From a text prompt, the AI system can automatically generate a wide range of images that look realistic and are sometimes strikingly expressive. Behind this capability, however, worries about bias and fakery have resurfaced. Technology is neutral, but human nature may not be.

Have you ever seen a teddy bear skateboarding in Times Square or an astronaut riding a horse in space?



DALL-E 2 lets you "see" images like these, even though they do not exist in reality.

DALL-E 2 is the latest version of DALL-E, a text-to-image tool developed by OpenAI, and an AI system that has recently broken into the spotlight.


DALL-E 2: Unprecedented image quality, plus new ways to rework existing images

In January 2021, OpenAI introduced DALL-E, a system built on the GPT-3 language model and the CLIP image recognition system. The name DALL-E is a blend of Salvador Dalí, the surrealist painter, and WALL-E, the robot protagonist of Pixar's animated film.

Compared with the first generation of DALL-E, DALL-E 2 is a clear step up: in OpenAI's human evaluations, raters preferred DALL-E 2 over DALL-E 1 71.7% of the time for caption matching and 88.8% of the time for photorealism. Image quality has improved as well: DALL-E produces 256 x 256-pixel images, whereas DALL-E 2 generates images at up to 1024 x 1024 pixels and with lower latency.

Using the same text description, "a painting of a fox sitting in a field at sunrise in the style of Claude Monet", the two generations of the system produce remarkably different results. Visually, the left image (DALL-E 1) looks like a crude cartoon, while the right image (DALL-E 2) is much sharper and has the quality of a real oil painting.

(Image source: OpenAI official website)


Aside from this, DALL-E 2 includes two new features: "inpainting" and "variations," both of which are useful for editing and touching up images.

(Image source: OpenAI official website)


Inpainting means changing part of an existing image. DALL-E 2 can seamlessly blend AI-generated content into an existing picture, for example replacing a pillow on a couch with a puppy or placing a toy duck in a sink.

Variations take an existing image as a base and generate new images with different angles and styles. Users can upload an image and rework it in various ways, or combine elements of two images into a new one.

(Image source: OpenAI official website)
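The article describes these features only conceptually. As a rough illustration of how they can be driven programmatically, the sketch below uses OpenAI's Images API, which exposes edit (inpainting) and variation endpoints for DALL-E 2; the openai Python package, the file names couch.png and mask.png, and the prompt text are assumptions made for this example.

```python
# Minimal sketch of DALL-E 2 inpainting and variations via OpenAI's Images API.
# Assumes the `openai` Python package (v1.x), an OPENAI_API_KEY environment
# variable, and hypothetical local files couch.png and mask.png (the mask's
# transparent region marks the area to repaint).
from openai import OpenAI

client = OpenAI()

# Inpainting: regenerate only the masked region, guided by the text prompt.
edit = client.images.edit(
    model="dall-e-2",
    image=open("couch.png", "rb"),
    mask=open("mask.png", "rb"),
    prompt="a corgi puppy sitting on the couch where the pillow was",
    n=1,
    size="1024x1024",
)
print(edit.data[0].url)

# Variations: new images that keep the uploaded image's overall composition
# and style but differ in the details.
variation = client.images.create_variation(
    image=open("couch.png", "rb"),
    n=2,
    size="1024x1024",
)
print([img.url for img in variation.data])
```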


Breakthroughs and challenges: how high-quality synthetic images are generated

DALL-E 2 builds on a computer vision engine called CLIP, which has been trained on hundreds of millions of images and their captions to judge how well a given text snippet matches an image.
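For a concrete sense of what this text-image matching looks like, here is a minimal sketch that scores several candidate captions against one image using the publicly released CLIP weights through the Hugging Face transformers library; the library choice, the checkpoint name, and the file fox_at_sunrise.png are assumptions made for illustration.

```python
# Minimal sketch of CLIP-style text-image matching, assuming the
# `transformers`, `torch`, and `Pillow` packages and the public
# openai/clip-vit-base-patch32 checkpoint.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("fox_at_sunrise.png")  # hypothetical local file
captions = [
    "a painting of a fox sitting in a field at sunrise",
    "an astronaut riding a horse in space",
    "a teddy bear skateboarding in Times Square",
]

inputs = processor(text=captions, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# Higher scores mean the caption matches the image better.
probs = outputs.logits_per_image.softmax(dim=-1)
for caption, p in zip(captions, probs[0].tolist()):
    print(f"{p:.3f}  {caption}")
```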

It is worth noting that, in building DALL-E 2, OpenAI turned CLIP around to create "unCLIP", a system that generates images from descriptions rather than descriptions from images. unCLIP also addresses some of CLIP's shortcomings, such as the fact that CLIP can be fooled into misidentifying what it sees when an image carries text that means something else.

Suppose the system is shown an image of an airplane labeled "car"; when a user later asks for a picture of a "car", the system may well produce an airplane. It is like talking to someone who has learned the wrong word. In this regard, unCLIP performs much better than CLIP.

Moreover, the new system uses diffusion models, which start from random noise and gradually refine it into a picture that matches the description. This can produce high-quality synthetic images, especially when combined with guidance techniques such as classifier-free guidance, which trade diversity for fidelity.
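To make that trade-off concrete, the schematic below sketches a single denoising step with classifier-free guidance; the unet function, the embeddings, and the guidance scale are placeholders for illustration, not OpenAI's actual implementation.

```python
# Schematic sketch of one denoising step with classifier-free guidance.
# `unet`, `x_t`, `t`, and the embeddings are placeholders, not OpenAI's code.
import torch

def guided_denoise_step(unet, x_t, t, text_emb, null_emb, guidance_scale=4.0):
    """Predict the noise in x_t, pushed toward the text condition.

    guidance_scale > 1 trades diversity for fidelity: the prediction is
    extrapolated further in the direction the caption suggests.
    """
    with torch.no_grad():
        eps_cond = unet(x_t, t, text_emb)    # noise prediction with the caption
        eps_uncond = unet(x_t, t, null_emb)  # noise prediction with an empty caption
    # Classifier-free guidance: move away from the unconditional prediction.
    return eps_uncond + guidance_scale * (eps_cond - eps_uncond)
```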

Even though text-to-image generation research has advanced, it has always been limited by what the models have never seen, and DALL-E 2 is likewise constrained by blind spots in its training data.

If, for example, you type "monkey" into DALL-E 2, it will generate a variety of monkey images, because it has learned that concept with the correct label. But type "howler monkey", a term it has not seen, and DALL-E 2 will not know it is the name of a species; it will do what it thinks best and present a picture of a monkey howling. For now, this is both the potential and the limitation of DALL-E 2.


Technology is neither good nor evil; human nature may not stand the test

Interestingly, OpenAI has never fully released the DALL-E model. For now, users can only preview the tool after creating an account on the OpenAI website, and the researchers say they intend to continue this phased process so that the technology can be released safely.

Even though the technology behind DALL-E 2 is stunning, many have already noticed a hidden concern. Beyond the bias the algorithm has been criticized for from the beginning, the sheer "quality" of the images can itself be frightening: the tool can be used for good, but it can also be used for darker purposes, such as deepfaking photos and videos.

Just as GPT-3 was allegedly used to fabricate fake news, tools like DALL-E 2 have the potential to be abused. If anyone can easily create eye-popping fake photos, no one can feel entirely safe; almost everything we see online would have to be taken with a grain of salt, and there is no question about that.

The technology has already raised concerns among the public. Some worry that if it were fully opened up, people would be wise to turn off their TVs and the Internet altogether and avoid anyone eager to tell them what they are seeing on the Web; if such fabrications become the norm, we will be creating hell for future generations.

OpenAI has anticipated possible problems such as image bias and misinformation, and says it will address them as it continues to build DALL-E.

Some of these measures include:

Images generated by DALL-E 2 will have a watermark to indicate they are AI-generated.

DALL-E 2 is trained on data that has been screened for toxic content, which reduces its potential to produce harmful material.

Anti-abuse safeguards prevent DALL-E 2 from generating recognizable faces of real people.

The tool forbids users from posting hate symbols, nudity, obscene gestures, or conspiracies about major current geopolitical events.

Users must explain the AI's role in generating the images, and they cannot freely share the generated images through other websites or apps.

We do not know how effective these measures will be, but at least DALL-E 2 is, for now, locked in a "deep box". It is essential to recognize that DALL-E 2 lets us express ourselves in ways that were previously all but impossible: polar bears playing the guitar, a Dalí-style sky garden, the Mona Lisa with a mohawk, all rendered realistically where they were once unimaginable. Furthermore, DALL-E 2 shows people how an AI system can observe and understand our world, which is vital for developing practical and safe AI.
