Apple is dabbling in AI image-editing with an open-source multimodal AI model.
Earlier this week, researchers from Apple and the University of California, Santa Barbara released MLLM-Guided Image Editing, or "MGIE," a multimodal AI model that can make Photoshop-style edits to images based on simple text commands.
On the AI development front, Apple has been characteristically cautious about its plans, and it was one of the few major companies that didn't announce big AI ambitions in the wake of last year's ChatGPT hype. However, Apple reportedly has an in-house ChatGPT-esque chatbot dubbed "Apple GPT," and Tim Cook has said Apple will make some major AI announcements later this year.
Whether those announcements include an AI image-editing tool remains to be seen, but based on this model, Apple is clearly doing research and development in the area.
While AI image-editing tools already exist, "human instructions are sometimes too brief for current methods to capture and follow," the research paper says, which often leads to lackluster or failed results. MGIE takes a different approach: it uses MLLMs, or multimodal large language models, to understand both the text prompt, or "expressive instruction," and image training data. Effectively, learning from MLLMs helps MGIE understand natural-language commands without the need for heavy description.
In examples from the research, MGIE can take an input image of a pepperoni pizza and, given the prompt "make this more healthy," infer that "this" refers to the pepperoni pizza and that "more healthy" can be interpreted as adding vegetables. The output image is thus a pepperoni pizza with some green vegetables scattered on top.
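Based on the paper's description, the flow can be sketched as a two-stage pipeline: an MLLM first expands the terse command into a detailed "expressive instruction" grounded in the image content, and a separate editing model then applies it. The sketch below is purely illustrative; the function names, the rule lookup standing in for the MLLM, and the string-based "editor" are hypothetical, not the real MGIE code.

```python
def expand_instruction(instruction: str, image_caption: str) -> str:
    """Stand-in for the MLLM stage: turn a brief human command into an
    expressive instruction grounded in what the image actually shows."""
    # A real MLLM reasons jointly over pixels and text; here we fake that
    # reasoning with a simple lookup keyed on the terse prompt.
    rules = {
        "make this more healthy":
            f"add fresh green vegetables on top of the {image_caption}",
    }
    return rules.get(instruction, instruction)


def edit_image(image: str, expressive_instruction: str) -> str:
    """Stand-in for the diffusion-based editor that consumes the expanded
    instruction and produces the edited image."""
    return f"{image} edited to {expressive_instruction}"


def mgie_style_edit(image: str, caption: str, instruction: str) -> str:
    # Stage 1: the MLLM rewrites the brief command into an expressive one.
    expressive = expand_instruction(instruction, caption)
    # Stage 2: the editor applies the detailed instruction to the image.
    return edit_image(image, expressive)
```

With the pizza example, `mgie_style_edit("pizza.jpg", "pepperoni pizza", "make this more healthy")` would route "this" to the pepperoni pizza and expand "more healthy" into an instruction about adding vegetables before any editing happens, which is the step brief prompts normally fail at.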
In another example comparing MGIE to other models, the input image is a forested shoreline and a tranquil body of water. With the prompt "add lightning and make the water reflect the lightning," other models omit the lightning reflection, but MGIE successfully captures it.
MGIE is available as an open-source model on GitHub and as a demo version hosted on Hugging Face.