Meta announced on Thursday the release of an AI model named Meta Motivo, designed to control the movements of a human-like digital agent, potentially enhancing the Metaverse experience.
The company has invested billions of dollars into AI, augmented reality, and other Metaverse technologies, raising its capital expenditure forecast for 2024 to a record $37 billion to $40 billion.
Meta has also been making many of its AI models available for free to developers, believing that an open approach will foster the creation of better tools for its services.
“We believe this research could lead to fully embodied agents in the Metaverse, creating more lifelike NPCs, democratizing character animation, and enabling new immersive experiences,” the company stated.
Meta Motivo aims to solve body control issues in digital avatars, allowing them to perform movements in a more realistic, human-like manner.
Additionally, Meta introduced a new training model for language processing called the Large Concept Model (LCM), which aims to “decouple reasoning from language representation.”
“The LCM marks a significant shift from traditional large language models. Instead of predicting the next token, the LCM predicts the next concept or high-level idea, represented by a full sentence in a multimodal and multilingual embedding space,” the company explained.
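The distinction between predicting the next token and predicting the next concept can be illustrated with a toy sketch. Everything here is hypothetical: the encoder and predictor below are crude stand-ins invented for illustration, not Meta's actual LCM architecture.

```python
# Toy illustration of concept-level prediction: operate on whole-sentence
# embeddings rather than individual tokens. All function names and logic
# are illustrative assumptions, not Meta's LCM implementation.

def embed_sentence(sentence):
    """Crude stand-in for a multilingual sentence encoder: a normalized
    bag-of-letters vector. A real system would use a learned embedding."""
    vec = [0.0] * 26
    for ch in sentence.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1.0
    total = sum(vec) or 1.0
    return [v / total for v in vec]

def predict_next_concept(context_embeddings):
    """Toy 'concept predictor': averages the context embeddings.
    A real LCM would run a trained model in the embedding space."""
    dim = len(context_embeddings[0])
    n = len(context_embeddings)
    return [sum(e[i] for e in context_embeddings) / n for i in range(dim)]

# A token-level LM would emit the next word; a concept-level model instead
# produces one fixed-size vector representing the next full sentence.
context = ["The cat sat on the mat.", "It began to purr."]
context_vecs = [embed_sentence(s) for s in context]
next_concept = predict_next_concept(context_vecs)
print(len(next_concept))  # a single concept vector, not a token
```

The point of the sketch is the interface, not the model: the prediction target is a sentence-sized embedding, so the same loop works regardless of the input language or modality that produced the embeddings.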
Meta also unveiled other AI tools, including Video Seal, which embeds a watermark into videos that is imperceptible to the naked eye but can later be traced.
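To make the idea of an imperceptible-but-traceable watermark concrete, here is a classic textbook technique, least-significant-bit embedding, sketched on a toy grayscale frame. This is purely illustrative: Video Seal's actual method is a learned, far more robust scheme, and none of the code below reflects it.

```python
# Textbook least-significant-bit (LSB) watermarking, for illustration only.
# Changing only the lowest bit of an 8-bit pixel shifts its value by at
# most 1/255 -- invisible to the eye, yet trivially recoverable by code.

def embed_watermark(pixels, bits):
    """Hide a bit string in the LSBs of 8-bit pixel values."""
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit  # clear the LSB, then set it to the message bit
    return out

def extract_watermark(pixels, length):
    """Recover the hidden bits by reading the LSBs back."""
    return [p & 1 for p in pixels[:length]]

frame = [120, 87, 200, 33, 54, 99, 180, 21]   # toy grayscale "frame"
message = [1, 0, 1, 1, 0, 1, 0, 0]
marked = embed_watermark(frame, message)

assert extract_watermark(marked, len(message)) == message
assert all(abs(a - b) <= 1 for a, b in zip(frame, marked))  # visually identical
```

LSB embedding is fragile (re-encoding the video destroys it), which is precisely why production systems like Video Seal use learned watermarks designed to survive compression and editing.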