No-code deep-tech tools will help future artists create better visual content

This article is contributed by Abigail Hunter-Syed, Partner at LDV Capital.

Despite the hype, the “creator economy” is not new. It has existed for generations and has primarily dealt in physical goods (ceramics, jewelry, paintings, books, photos, videos, etc.). Over the last two decades, it has become predominantly digital. The digitization of creation has triggered a massive shift in content creation, with everyone and their mother now creating, sharing, and participating online.

The vast majority of the content created and consumed on the Internet is visual. In our latest insights report at LDV Capital, we found that by 2027 there will be at least 100 times more visual content in the world. The future creator economy will be driven by visual technology tools that automate aspects of content creation and remove the technical barriers to digital creation. This article discusses the findings of that report.

[Image: Group of superheroes on a dark background. Image credit: © LDV CAPITAL INSIGHTS 2021]

We now live as much online as we do in person, and as such we participate in and generate more content than ever before. Whether it’s text, images, videos, stories, movies, livestreams, video games, or anything else on our screens, it’s visual content.

At the moment, it takes time, and often years of prior training, to produce a single piece of high-quality, context-relevant visual content. It has also typically required deep technical expertise to produce content at the speed and quantity required today. But new platforms and tools powered by visual technologies are changing that paradigm.

Computer vision will help with livestreaming

Livestreaming is video recorded and broadcast in real time over the Internet, and it is one of the fastest-growing segments of online video, expected to be a $150 billion industry by 2027. Over 60% of people aged 18 to 34 watch livestreams daily, making it one of the most popular forms of online content.

Gaming is the most prominent livestreaming content today, but shopping, cooking, and events are growing rapidly and will continue on that path.

The most successful streamers today spend 50 to 60 hours a week livestreaming, and many more hours on production. Visual technology tools that leverage computer vision, sentiment analysis, overlay technology, and more will help automate livestreams. They will make it possible to analyze streamers’ feeds in real time and add production elements that improve quality, cutting down on the time and technical skill required of streamers today.
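
As a minimal sketch of the real-time loop, the snippet below uses OpenCV to read a feed frame by frame and stamp a production overlay on each frame. The analysis step is deliberately trivial; a real tool would plug face tracking, sentiment analysis, or highlight detection in at that point:

```python
import cv2

def annotate_stream(source=0):
    """Read a live feed frame by frame and draw a simple production overlay."""
    cap = cv2.VideoCapture(source)
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # Stand-in for real analysis: face tracking, sentiment analysis,
        # highlight detection, etc. would run here on each frame.
        cv2.rectangle(frame, (10, 10), (120, 50), (0, 0, 255), -1)
        cv2.putText(frame, "LIVE", (20, 42), cv2.FONT_HERSHEY_SIMPLEX,
                    1.0, (255, 255, 255), 2)
        cv2.imshow("stream", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()

annotate_stream()
```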

Synthetic visual content will be ubiquitous

Much of the visual content we see today is already computer-generated imagery (CGI), visual effects (VFX), or modified by software (e.g., Photoshop). Whether it’s the Army of the Dead in Game of Thrones or an altered image of Kim Kardashian in a magazine, we see content everywhere that has been digitally designed and altered by human artists. Now computers and artificial intelligence can generate images and videos of people, things, and places that never physically existed.

By 2027, we will see more photorealistic synthetic images and videos than those documenting a real person or place. Some experts in our report even project that synthetic visual content will make up almost 95% of the content we see. Synthetic media uses generative adversarial networks (GANs) to write text, generate photos, create game scenarios, and more from simple human prompts such as “write me 100 words about a penguin on top of a volcano.” GANs are the next Photoshop.
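
For intuition, here is a toy sketch of the adversarial setup behind GANs, written in PyTorch. It illustrates the training dynamic only; it is not a production model like GauGAN, and the network sizes and image shape are arbitrary choices:

```python
import torch
import torch.nn as nn

LATENT_DIM = 100
IMG_SHAPE = (3, 64, 64)                    # channels, height, width
IMG_SIZE = 3 * 64 * 64

# Generator: maps random noise ("latent codes") to synthetic images.
generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 1024), nn.ReLU(),
    nn.Linear(1024, IMG_SIZE), nn.Tanh(),  # pixel values in [-1, 1]
)

# Discriminator: estimates the probability that an image is real.
discriminator = nn.Sequential(
    nn.Linear(IMG_SIZE, 1024), nn.LeakyReLU(0.2),
    nn.Linear(1024, 1), nn.Sigmoid(),
)

# The adversarial game: the generator is trained to fool the discriminator,
# while the discriminator is trained to separate real from generated images.
z = torch.randn(16, LATENT_DIM)            # a batch of random latent codes
fake = generator(z).view(16, *IMG_SHAPE)   # 16 synthetic 64x64 RGB images
scores = discriminator(fake.view(16, -1))  # "how real does each look?"
print(fake.shape, scores.shape)            # (16, 3, 64, 64) and (16, 1)
```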

[Image: Left, a rudimentary drawing; right, the landscape image built by NVIDIA’s GauGAN from that drawing. Image credit: © LDV CAPITAL INSIGHTS 2021]

In some cases, synthesizing objects and people will be faster, cheaper, and more inclusive than hiring models, finding locations, and staging a complete photo or video shoot. It will also make video programmable, as simple as making a slide deck.
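
“Programmable video” can be read literally. As a minimal sketch, assuming the moviepy library (v1.x, whose TextClip also requires ImageMagick to be installed), a video can be assembled from scripted scenes much the way a slide deck is assembled from slides:

```python
from moviepy.editor import (ColorClip, CompositeVideoClip, TextClip,
                            concatenate_videoclips)

def slide(text, seconds=3):
    """One 'slide': a colored background with centered text."""
    bg = ColorClip(size=(1280, 720), color=(20, 20, 40)).set_duration(seconds)
    label = (TextClip(text, fontsize=60, color="white")
             .set_position("center").set_duration(seconds))
    return CompositeVideoClip([bg, label])

# Each scene is just data; a script (or a model) could generate this list.
video = concatenate_videoclips([
    slide("A penguin on top of a volcano"),
    slide("Scene two, generated from a script"),
])
video.write_videofile("programmable.mp4", fps=24)
```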

Synthetic media built on GANs can also personalize content almost instantly, allowing any video to address the viewer directly by name, or a video game to be written in real time as a person plays it. The gaming, marketing, and advertising industries are already experimenting with the first commercial uses of GANs and synthetic media.

Artificial intelligence will deliver motion capture to the masses

Animated video requires expertise, as well as even more time and budget, than content starring people. Animated video typically refers to 2D and 3D cartoons, motion graphics, computer-generated imagery (CGI), and visual effects (VFX). It will be an increasingly important part of the content strategy for brands and companies, deployed across image, video, and livestream channels as a mechanism to diversify content.

[Chart: The motion capture landscape. Image credit: © LDV CAPITAL INSIGHTS 2021]

The biggest obstacle to generating animated content today is the skill – and the resulting time and budget – needed to create it. A traditional animator typically creates four seconds of content per working day. Motion capture (MoCap) is a tool often used by professional animators in movies, television, and games to digitally record a person’s physical movements for the purpose of animating them; think of recording Steph Curry’s jump shot for the NBA 2K games.

Advances in photogrammetry, deep learning, and artificial intelligence (AI) enable camera-based MoCap with few or no suits, sensors, or hardware. Capturing facial movements has already come a long way, as evidenced by some of the incredible photo and video filters out there. As the capability evolves into full-body capture, it will make MoCap easier, faster, more budget-friendly, and more accessible for animated visual content in video production, livestreaming of virtual characters, games, and more.
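
This kind of hardware-free MoCap is already within reach of a laptop webcam. A minimal sketch using Google’s MediaPipe Pose library extracts 33 body landmarks per frame; in a real pipeline those landmarks would drive a rigged character rather than be printed:

```python
import cv2
import mediapipe as mp

mp_pose = mp.solutions.pose

def capture_pose(source=0):
    """Markerless, camera-based MoCap: body landmarks per frame, no suit."""
    cap = cv2.VideoCapture(source)
    with mp_pose.Pose(min_detection_confidence=0.5) as pose:
        while cap.isOpened():
            ok, frame = cap.read()
            if not ok:
                break
            # MediaPipe expects RGB; OpenCV delivers BGR.
            results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if results.pose_landmarks:
                # 33 landmarks with (x, y, z, visibility) per frame.
                nose = results.pose_landmarks.landmark[mp_pose.PoseLandmark.NOSE]
                print(f"nose at ({nose.x:.2f}, {nose.y:.2f}, {nose.z:.2f})")
    cap.release()

capture_pose()
```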

Almost all content will be gamified

Gaming is a massive industry that will hit nearly $236 billion globally by 2027. It will expand and grow as more content introduces gamification to promote interactivity. Gamification applies typical game elements, such as point scoring, interactivity, and competition, to promote engagement.

Games with goals beyond gameplay itself and more diverse storylines appeal to a wider audience. Growth in the number of players, their diversity, and the hours spent playing online games will drive high demand for unique content.

AI and cloud infrastructure capabilities play a big role in helping game developers build tons of new content. GANs will gamify and personalize content, engage more players, and expand interactions and community. Games as a Service (GaaS) will become a major business model for gaming. Game platforms are leading the growth of immersive online interactive spaces.

People will interact with many digital beings

We need digital identities to produce, consume, and interact with content. In our physical lives, people have many facets to their personality and represent themselves differently in different circumstances: the boardroom vs. the bar, in groups vs. alone, etc. Online, old-school AOL screen names have already evolved into profile pictures, memojis, avatars, gamertags, and more. Over the next five years, the average person will have at least three digital versions of themselves, both photorealistic and fantastical, to participate online.

[Image: Five examples of digital identities. Image credit: © LDV CAPITAL INSIGHTS 2021]

Digital identities (or avatars) require visual technology. Some will enable public anonymity for the individual, some will be pseudonyms, and others will be directly linked to physical identity. An increasing number of them will be powered by AI.

These autonomous virtual beings will have personalities, emotions, problem-solving abilities and more. Some of them will be programmed to look, sound, act and move like an actual physical person. They will be our assistants, colleagues, doctors, dates and more.

Interaction with both human-driven avatars and autonomous virtual beings, in virtual worlds and with gamified content, sets the stage for the emergence of the metaverse. The metaverse could not exist without visual technology and visual content, and I will elaborate on that in a future article.

Machine learning will curate, authenticate and moderate content

For creators to continuously produce the quantity of content needed to compete in the digital world, a number of tools will be developed to automate the repackaging of content: long form into short form, videos into blog posts (or vice versa), social posts, and more. These systems will select content and format based on the performance of previous posts, using automated analysis from computer vision, image recognition, sentiment analysis, and machine learning. They will also inform the next generation of content to be created.
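
As a minimal sketch of the repackaging step, consider carving a short-form clip out of a long-form recording. The per-segment engagement score here is a hypothetical output of the kind of past-performance analysis described above:

```python
from dataclasses import dataclass

@dataclass
class Segment:
    text: str
    duration_s: float
    score: float  # hypothetical engagement prediction from past-post analytics

def repackage(segments, budget_s=60.0):
    """Greedy long-to-short repackaging: keep the highest-scoring segments
    until the short-form time budget is spent, preserving original order."""
    keep, total = set(), 0.0
    for seg in sorted(segments, key=lambda s: s.score, reverse=True):
        if total + seg.duration_s <= budget_s:
            keep.add(id(seg))
            total += seg.duration_s
    return [s for s in segments if id(s) in keep]
```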

To then filter the vast amount of content most effectively, autonomous curation bots powered by smart algorithms will sift through it and present us with content tailored to our interests and aspirations. Eventually, we will see personalized synthetic video content replace text-heavy newsletters, media, and emails.
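
At its core, such curation is a ranking problem. A minimal sketch, assuming a user’s interests and each piece of content have already been embedded as vectors by an upstream vision or language model:

```python
import numpy as np

def curate(user_interests, content_embeddings, k=5):
    """Rank content by cosine similarity to a user-interest vector."""
    user = user_interests / np.linalg.norm(user_interests)
    items = content_embeddings / np.linalg.norm(
        content_embeddings, axis=1, keepdims=True)
    return np.argsort(items @ user)[::-1][:k]  # indices of the top-k items

# Toy usage with random vectors standing in for real embeddings.
rng = np.random.default_rng(0)
print(curate(rng.normal(size=128), rng.normal(size=(1000, 128))))
```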

In addition, the abundance of new content, including visual content, will require ways to authenticate it and attribute it to its creator, both for rights management and for handling deepfakes, fake news, and more. By 2027, most consumer phones will be able to authenticate content via applications.
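
One plausible building block for authentication and attribution, sketched with Python’s standard library only (real systems layer keys, certificates, and provenance metadata on top of ideas like these):

```python
import hashlib
import hmac

def fingerprint(path):
    """Content fingerprint: SHA-256 over the raw bytes of a media file."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def sign(path, creator_key: bytes) -> str:
    """Attribution tag that only the holder of the creator's key can produce."""
    return hmac.new(creator_key, fingerprint(path).encode(),
                    hashlib.sha256).hexdigest()

def verify(path, creator_key: bytes, tag: str) -> bool:
    """True if the file is unmodified and the tag came from this key."""
    return hmac.compare_digest(sign(path, creator_key), tag)
```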

It is also deeply important to detect disturbing and dangerous content, which is becoming increasingly difficult given the sheer volume being published. AI and computer vision algorithms are needed to automate this process, detecting hate speech, graphic pornography, and violent attacks, because doing it manually in real time is too difficult and not cost-effective. Multimodal moderation that includes image recognition, as well as voice and text recognition and more, will be required.
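
Multimodal moderation ultimately comes down to fusing signals. A minimal sketch, assuming upstream image, text, and audio classifiers have already produced per-segment risk scores in [0, 1]; the worst-modality rule used here is the simplest possible fusion choice:

```python
def moderate(image_scores, text_scores, audio_scores, threshold=0.8):
    """Fuse per-modality risk scores from upstream classifiers and flag
    content whose worst-offending modality exceeds the threshold."""
    worst_modality, worst = max(
        [("image", max(image_scores, default=0.0)),
         ("text", max(text_scores, default=0.0)),
         ("audio", max(audio_scores, default=0.0))],
        key=lambda pair: pair[1],
    )
    return {"flagged": worst >= threshold,
            "modality": worst_modality, "risk": worst}

# e.g. frame-level image risk, per-sentence text risk, per-window audio risk
print(moderate([0.1, 0.92], [0.2], [0.05]))
```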

Visual content tools are the biggest opportunity in the creator economy

Over the next five years, individual creators will use visual technology tools to create visual content that competes with professional production teams in both quality and quantity. The biggest business opportunities in the creator economy today are the visual technology platforms and tools that will allow these creators to focus on the content, not the technical work of creating it.

Abigail Hunter-Syed is a partner at LDV Capital, which invests in people building companies powered by visual technology. She thrives on collaborating with deep, technical teams that leverage computer vision, machine learning and AI to analyze visual data. She has more than ten years of experience leading strategy, operations and investments in companies across four continents and rarely says no to soft-serve ice cream.

