Snap Reveals Advanced AR with Enhanced Generative AI and Realistic Effects
In an effort to stay ahead of competing social media platforms, Snapchat owner Snap on Tuesday unveiled its most recent version of generative AI technology, which will enable users to apply more lifelike special effects when recording themselves with their phone cameras.
In the realm of augmented reality (AR), which superimposes digital effects on real-world images or videos, Snap has been a trailblazer. The firm is betting that creating more sophisticated and amusing special effects, known as lenses, will draw new users and advertisers to Snapchat, even though it remains far smaller than competitors like Meta.
According to the company, AR developers can now produce AI-powered lenses, which Snapchat users can incorporate into their content.
The Santa Monica, California-based company also unveiled an enhanced version of Lens Studio, its developer tool, which allows developers and artists to create augmented reality features for Snapchat as well as other websites and apps.
Bobby Murphy, Snap's chief technology officer, said the improved Lens Studio will let creators produce more complex augmented reality effects in a matter of hours rather than weeks.
“What’s fun for us is that these tools both stretch the creative space in which people can work, but they’re also easy to use, so newcomers can build something unique very quickly,” Murphy stated in a recent interview.
Lens Studio now includes a new suite of generative AI tools, such as an AI assistant that can answer questions from developers in need of help. Another tool eliminates the need for artists to create a 3D model from scratch by allowing them to enter a prompt and receive an automatically generated three-dimensional image for their AR lens.
Previous iterations of AR technology were limited to basic effects, such as overlaying a hat on a person’s head in a video. According to Murphy, Snap’s innovations will now enable AR developers to produce more lifelike lenses, such as ones that have the hat move in unison with a person’s head and blend in with the scene’s illumination.
Murphy also said that Snap intends to develop full-body AR experiences, rather than face-only ones, such as rendering a new outfit on a person, which remains a very challenging task at the moment.