Apple releases MGIE, an open-source AI model that edits images based on text commands

Apple is dabbling in AI image editing with an open-source multimodal AI model.
Earlier this week, researchers from Apple and the University of California, Santa Barbara released MLLM-Guided Image Editing, or "MGIE," a multimodal AI model that can make Photoshop-style edits to images based on simple text commands.
On the AI development front, Apple has been characteristically cautious, and it was one of the few major tech companies that didn't announce big AI plans in the wake of last year's ChatGPT hype. However, Apple reportedly has an in-house ChatGPT-esque chatbot dubbed "Apple GPT," and Tim Cook has said Apple will be making a major AI announcement later this year.
Whether that announcement includes an AI image-editing tool remains to be seen, but this model shows Apple is clearly doing research and development in the area.
While there are already AI image-editing tools out there, "human instructions are sometimes too brief for current methods to capture and follow," the research paper notes, which often leads to lackluster or failed results. MGIE takes a different approach: it uses multimodal large language models, or MLLMs, to expand a terse text prompt into an "expressive instruction" that describes the intended edit in detail, drawing on both the command and the image itself. Effectively, learning from MLLMs lets MGIE follow natural-language commands without requiring the user to spell out every detail.
In examples from the research, MGIE can take an input image of a pepperoni pizza and, given the prompt "make this more healthy," infer that "this" refers to the pizza and that "more healthy" can be interpreted as adding vegetables. The output image is the same pepperoni pizza with green vegetables scattered on top.
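To make that two-stage idea concrete, here is a purely illustrative Python sketch. None of it is Apple's actual code; both function names and bodies are hypothetical stand-ins for the MLLM stage (which expands the terse command) and the editing stage (which applies it):

```python
# A minimal, conceptual sketch of the two-stage flow described above.
# Neither function is Apple's actual code or API; both are hypothetical
# stand-ins. Stage 1 mirrors the MLLM, which expands a terse command
# into an "expressive instruction"; stage 2 mirrors the editing model,
# which applies that richer instruction to the image.

def derive_expressive_instruction(image_path: str, terse_command: str) -> str:
    """Hypothetical stand-in for the MLLM stage.

    A real implementation would feed the image and command to a
    multimodal LLM and return its expanded description of the edit.
    """
    # Hard-coded to mirror the pizza example from the paper.
    if terse_command == "make this more healthy":
        return "add green vegetable toppings to the pepperoni pizza"
    return terse_command


def apply_edit(image_path: str, expressive_instruction: str) -> str:
    """Hypothetical stand-in for the image-editing stage."""
    print(f"editing {image_path}: {expressive_instruction}")
    return "edited_" + image_path


expressive = derive_expressive_instruction("pizza.jpg", "make this more healthy")
result = apply_edit("pizza.jpg", expressive)
```

The point of the split is that the editing model never has to guess what "more healthy" means; by the time it runs, the MLLM has already turned the vague command into a concrete edit.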
In another example comparing MGIE to other models, the input image shows a forested shoreline and a tranquil body of water. Given the prompt "add lightning and make the water reflect the lightning," other models omit the reflection, but MGIE successfully captures it.
MGIE is available as an open-source model on GitHub and as a demo version hosted on Hugging Face.
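For readers who want to drive the hosted demo from code rather than the web UI, Hugging Face Spaces can generally be queried with the official gradio_client package. The sketch below is heavily hedged: the Space ID, endpoint name, and parameter order are placeholders, not confirmed details of the MGIE demo; the Space's own "Use via API" panel is the authoritative reference.

```python
# Hypothetical sketch: querying a Hugging Face Space with gradio_client.
# The Space ID, parameters, and endpoint name are placeholders; consult
# the MGIE demo's "Use via API" panel for the real values.
from gradio_client import Client, handle_file

client = Client("<owner>/<mgie-space>")  # placeholder: the actual Space ID
result = client.predict(
    handle_file("pizza.jpg"),            # input image (assumed parameter)
    "make this more healthy",            # edit instruction (assumed parameter)
    api_name="/predict",                 # assumed endpoint name
)
print(result)  # typically a filepath or URL for the edited image
```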