Planning the Big View
The video introduces the concept of setting up a "home base" for your online business using AMP. This home base will include a main website, blog, lead magnets, online memberships, and capabilities for internal and external training. The goal is to provide a comprehensive platform where you can manage all aspects of your online presence and activities without needing to code.
Here are the key things you will be able to do after you watch this demo:
Plan the information architecture and site map for your home base website.
Implement a main marketing website, blog, and lead magnets using AMP.
Manage unlimited online memberships, courses, and subscription options within AMP.
Develop internal company training and external customer training programs on your AMP platform.
Integrate sales and marketing funnels into your AMP-powered home base.
Consolidate your online business activities into a single, code-free platform.
Setting Up Home Base and Information Architecture (0:00)
Josh Lomelino introduces the concept of launching a website and the importance of having a clear information architecture.
The main website will include key information, marketing messages, and contact details.
Josh mentions the possibility of consolidating various online elements like blogs and marketing sites into one central location.
The goal is to use AMP to manage all online business activities efficiently.
Components of Home Base (2:16)
Josh explains that home base will include the main website and a blog.
The blog will feature unlimited articles to attract traffic and address specific pain points.
Lead magnets, such as free resources, training, and digital downloads, can be used to drive traffic to the site.
Online memberships, both free and paid, can be managed through AMP, offering various classes and subscription models.
Handling Internal and External Training (3:29)
Josh discusses the need for internal company training and external customer training.
A knowledge base is essential for providing key information and support.
AMP allows for the management of multiple memberships and classes, including one-time payments and ongoing subscriptions.
The platform supports a variety of training needs, from company training to customer training.
Sales and Marketing Integration (4:33)
Josh emphasizes the importance of integrating sales and marketing within AMP.
Marketing funnels are crucial for converting incoming traffic into sales.
Home base serves as the central operating system for all business activities, including digital and physical products.
The goal is to handle everything from company training to customer-facing training without the need for coding.
Planning and Organizing Content (6:35)
Josh outlines the process of planning and organizing the site, starting with a minimum viable product (MVP).
The next video will cover the basics of building a site map and thinking about the site's vision.
The approach will be incremental, starting small and building up from there.
The ultimate goal is to have a clear plan for content organization, implementation, and growth.
Anomaly Studios Done-for-You Services
Launching a product or service can be tough. You totally could implement each core piece of your marketing strategy directly into Anomaly AMP, leveraging our powerful marketing funnels, landing pages, website and blog tools, and online membership and sales features. But even with all those resources, attracting customers and keeping up with the pace of content development and advertising implementation can feel like a real challenge.
We've been there. We understand the struggle. That's why my team is ready to help you build a solid foundation for growth. We'll partner with you to develop compelling pre-launch content, cultivate a targeted audience, and help you create a successful seed launch, setting you up for scalable success.
[Image: Your Infrastructure]
(view in landscape on mobile to see the stages in detail)
[PDF link: SMART Business Success Path]
Here is a PDF download of your Success Path that you can print and put on your wall. This is our growth framework. These are the five phases of growth, and this will help you see the big view. This is how the Anomaly Studios team can help you grow—strategically and systematically.
The Deliverables section at the bottom of the printout above shows each key step or deliverable in the growth sequence. Each phase has 3-5 action steps. Once completed, we move on to the next phase and unlock increasing levels of growth. We figured this all out the hard way, and now you get to benefit from the streamlined steps in our growth framework.
[Animation: Success Path stages]
(view in landscape on mobile to see the stages in detail)
The PDF download above outlines a phased approach with detailed sprints and tasks to efficiently develop, launch, and scale the product offering. The printout is organized in five phases from left to right. It includes deliverables such as an onboarding process to define the vision, business analytics and market research to validate the positioning, testing of marketing models and lead magnets, a pre-launch offer with revenue automation, and the final development of an optimized evergreen sales funnel. When we meet, we can create the statement of work and define roles and responsibilities to ensure a collaborative and streamlined process. Each teal number marks a key step or deliverable in the process of building sustainable, ongoing growth and traffic.
It’s important to know that you are working with the best when you work with Anomaly Studios. Our team is composed of award-winning designers, content strategists, video editors, traffic strategists, and more to help you bring your product to life and to market at whatever level you need us. Several of our project leads have one or more graduate degrees specifically in this space.
For example, Ryun took his own YouTube channel to 1.5 million subscribers and over a billion views.
Or Nick, an award-winning instructional designer and writer who has crafted world-class content and educational experiences for some of the world’s top universities.
Claire is one of our visual designers who helps your products and user experience look amazing.
Shara is our data wizard who helps you architect your traffic channels and research affiliate partners.
And you can work directly with me as your launch strategist and team leader to help orchestrate the team to make it all come to life.
These are just a few of the team members you can work with.
[Image: Revenue Value Ladder]
How big do you want to go?
How fast do you want to grow?
Where do you fall on the DIY to done-for-you spectrum?
After you have thought about the three questions above, schedule a meeting with my Zoom meeting calendar integration at a time that works for you, or email me at joshua@anomalystudios.com.
If you would like to invite others to the Zoom meeting simply add them (shown below). Everyone can align their calendars with the invites that are sent out from my Calendar Hero integrations.

Keywords: Anomaly AMP, digital engagement, online presence, content management, marketing funnels, evergreen content, user experience, scalability, healthcare industry, education sector, e-commerce, social media integration, customizable templates
Welcome to AMP
[Image: Module 1 banner]
Here are the top things you will be able to do when you complete this module.
Introduction to AMP
[Image: Module 2 banner]
Here are the top things you will be able to do when you complete this module.
Core Components
[Image: Module 3 banner]
[Image: Module 4 banner]
Coming Soon
[Image: Module 5 banner]
Coming Soon
Master Script Framework
After completing this video, viewers will be able to develop a master script framework for producing a full year of unique, inspirational Instagram reels. They will learn how to batch-create scripts, add descriptions and hashtags, and spot-check content for quality and consistency. The video guides users through automating the content creation process, organizing everything in one place, and preparing for efficient scheduling and posting. By following these steps, viewers can streamline their social media production and ensure their messaging remains engaging and on-brand.
Here are the key things you will be able to do after you watch this demo:
Develop a master script framework for content creation
Generate and batch unique weekly scripts
Spot-check and refine content for quality and consistency
Automate the production and organization of social media assets
Schedule and prepare posts for efficient publishing
Integrate descriptions and hashtags for each script
Critique and adjust content to maintain brand messaging
Developing the Master Script Framework 0:09
Josh Lomelino explains the importance of developing a master script framework to allow AI to rapidly produce content.
The sequence spans 52 weeks, with the final outcome being Instagram reels that will also be used for ads.
The feedback loop will be used to create Canva slides for simple posts on Facebook and Instagram.
Josh will be working on a startup product that is in its beginning stages, creating content from scratch.
Initial Tests and Experimentation 1:47
Josh shows a demo of initial tests to figure out how to proceed with social media content.
The first test involves a digital spokesperson promoting a game, with different voices and accents.
Josh emphasizes the need for 52 weeks of content and trains ChatGPT on the product.
The process involves starting with a brand new chat and training ChatGPT on the game description.
Training ChatGPT and Generating Scripts 5:26
Josh begins training ChatGPT by asking for a 15-second script for an Instagram reel.
The goal is to generate a sequence of 52 video scripts for Instagram reels.
Josh requests ChatGPT to provide a list of titles thematically broken down for each week.
The focus is on creating thought-provoking content that generates interest and shareability.
Refining the Scripts and Thematic Breakdown 9:09
Josh continues to refine the scripts, ensuring they are thematically broken down.
The process involves saving snapshots of the training process to keep the framework on track.
Josh emphasizes the importance of critiquing and providing feedback to steer the model in the right direction.
The goal is to create a strong starting point for the rest of the social media calendar.
Finalizing the Scripts and Automating Production 16:18
Josh finalizes the first script as a test and generates the video in 4K.
The process involves duplicating the project, pasting the script, and using different voices.
Josh demonstrates how to generate all 52 weeks of scripts, ensuring each week is unique and inspirational.
The final step involves copying and pasting the scripts into a Google Doc for easy management and scheduling.
Managing the Social Media Calendar 24:44
Josh explains the importance of having Instagram descriptions and hashtags for each script.
The process involves saving the framework and ensuring all future scripts follow the combined format.
Josh spot checks the scripts to ensure they stay on track and provide feedback as needed.
The goal is to have everything ready for scheduling and posting on social media platforms.
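The combined weekly format described above (script plus description plus hashtags for each of the 52 weeks) can be kept in one structured file for scheduling. Here is a minimal sketch of that organization; it is an illustration, not Josh's actual tooling, and the field names are assumptions:

```python
# Sketch: organize weekly scripts, descriptions, and hashtags into one CSV
# ready for a scheduling tool. Field names are illustrative assumptions.
import csv
from dataclasses import dataclass, asdict

@dataclass
class WeeklyPost:
    week: int
    title: str
    script: str
    description: str
    hashtags: str

def save_calendar(posts, path="social_calendar.csv"):
    """Write the posts, sorted by week, to a CSV scheduling file."""
    with open(path, "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(
            f, fieldnames=["week", "title", "script", "description", "hashtags"]
        )
        writer.writeheader()
        for post in sorted(posts, key=lambda p: p.week):
            writer.writerow(asdict(post))

# Two placeholder weeks standing in for the full 52-week sequence.
posts = [
    WeeklyPost(1, "Start Small", "15-second reel script...",
               "Why starting small wins.", "#startup #growth"),
    WeeklyPost(2, "Keep Going", "15-second reel script...",
               "Momentum beats motivation.", "#consistency"),
]
save_calendar(posts)
```

Keeping everything in one file like this makes the spot-checking step easier, since every week's script, description, and hashtags sit side by side.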
Setting Up the Video Production Pipeline 32:43
Josh outlines the next steps for setting up the video production pipeline.
This includes selecting voices, actors, and actresses for the brand.
The process involves using Showbiz to produce each of the 52 videos.
Josh emphasizes the importance of having a fast and easy pipeline for production.
Finalizing the Year's Worth of Content 41:53
Josh continues to batch produce the remaining scripts, ensuring they are unique and inspirational.
The process involves spot checking the scripts and providing feedback to keep the model on track.
Josh demonstrates how to manage the entire year's worth of content in one spot.
The final step involves generating the entire year's content and ensuring it is ready for scheduling and posting.
Anomaly AMP Channel Partner Program
Josh Lomelino is the founder of Anomaly Studios and creator of the Anomaly AMP platform. He previously ran a service-based business but struggled with the constant fight for survival and lack of control over his time and finances. Anomaly AMP is the platform he built to automate and scale his business, providing a way to earn recurring passive income. Now he's making that same passive income power available through an affiliate program that offers commissions on referred customers simply for sharing our free resources.
After you have watched the video above, create your account for the channel partner program at the bottom of this page or by clicking here.
To create financial projections, go to our revenue calculator and click "recurring." Then enter $99.40 (20% of the Anomaly AMP subscription) and enter 1 customer per month (or the number you think you can get). As you enter customers per month for onboarding, the financial information appears below as monthly, annual, and cumulative figures across years, assuming you keep up the cadence of one customer per month on average. This recurring revenue stacks and compounds over time.
Launch the Calculator and click the recurring tab.
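The arithmetic behind the calculator's recurring tab can be sketched in a few lines. This is a simplified model assuming a flat $99.40 monthly commission per referred customer and no churn, not the calculator's actual implementation:

```python
# Sketch of the recurring-commission projection, assuming a flat $99.40/month
# commission per referred customer and zero churn (simplifying assumptions).

def recurring_projection(new_per_month, commission=99.40, months=36):
    """Return (monthly, cumulative) income series, one entry per month."""
    monthly, cumulative = [], []
    total = 0.0
    active = 0.0  # customers paying a commission this month
    for _ in range(months):
        active += new_per_month       # onboard this month's new customers
        income = active * commission  # every active customer pays monthly
        total += income
        monthly.append(income)
        cumulative.append(total)
    return monthly, cumulative

monthly, cumulative = recurring_projection(new_per_month=1)
print(f"Month 12 MRR: ${monthly[11]:,.2f}")          # 12 customers paying $99.40
print(f"3-year cumulative: ${cumulative[-1]:,.2f}")
```

At one customer per month, monthly income grows by $99.40 every month because each new customer adds to the existing base rather than replacing it.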
One Customer Per Month Example
[Screenshots: calculator data entry and results, one customer per month]
Five Customers Per Month Example
[Screenshots: calculator data entry and results, five customers per month]
In this video, Josh Lomelino demonstrates how to create an AI-powered digital voice replica using 11 Labs, enabling content creators to rapidly generate high-quality audio and video content at scale. By training the system with a consistent audio sample, users can produce automated voice performances that sound like their own, allowing them to create lectures, demos, and other content quickly and efficiently. The method involves uploading 1-3 hours of controlled audio recordings, fine-tuning voice settings, and integrating with platforms like HeyGen to automate video production. After watching this tutorial, viewers will be able to develop their own AI voice clone, streamline content creation, and overcome time constraints by generating multiple scripts and videos with minimal manual effort.
Here are the key things you will be able to do after you watch this demo:
Train an AI voice synthesis system using personal audio recordings
Generate consistent voice replicas with controlled audio samples
Optimize AI-generated voice settings for natural-sounding output
Integrate voice cloning technology with video production platforms
Create automated content at scale using text-to-speech technologies
Manage AI voice generation credits efficiently
Export and store audio files in multiple formats for different applications
Prototype and refine scripts using AI voice technology
Develop a workflow for rapid content creation across lectures, demos, and presentations
Leverage AI tools to overcome time constraints in content production
Creating a Voice Replica Using AI 0:09
Josh Lomelino discusses the use of AI-powered voice synthesis to create a voice replica, emphasizing the challenge of matching human recordings.
He highlights the effectiveness of using text prompts to quickly prototype, test, and revise scripts or generate finished audio files.
Josh mentions his preference for the 11 Labs tool, which offers a studio mode for producing longer-form audio tracks.
He shares his initial struggles with the tool and how contacting their support provided helpful suggestions.
Training the System for Consistent Output 1:24
Josh explains the importance of training the system with a consistent audio sample to avoid unnatural variations in volume and tone.
He describes his initial mistake of using diverse recordings from different sessions, which led to inconsistent results.
Josh emphasizes the need for a controlled environment with a single, consistent audio sample for better results.
He plans to demonstrate the settings that produce the best results for replicating his voice in the user interface.
Optimizing Generated Audio Files 2:56
Josh advises generating audio sparingly to avoid exhausting monthly credits and recommends starting with smaller sections of text.
He explains the process of refining the output and generating both WAV and MP3 audio files for different applications.
Josh mentions the importance of storing both WAV and MP3 files for secure storage and project organization.
He notes that it may take several attempts to develop a method that works well for the user.
Exporting and Integrating Audio Files 4:19
Josh describes two methods for uploading audio files to virtual avatars: exporting both WAV and MP3 versions or integrating the 11 Labs API directly with HeyGen.
He prefers using the WAV audio file for higher quality and to avoid double compression but acknowledges the need to export the MP3 format for larger tracks.
Josh explains the integration of the 11 Labs API with HeyGen, which allows for rapid development of prototypes and large volumes of content.
He mentions the need to break up scripts into manageable sections for efficient processing by the software.
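Breaking a long script into manageable sections, as described above, can be automated by splitting at sentence boundaries under a character cap. The 2,500-character limit below is an assumed placeholder, not a documented 11 Labs or HeyGen limit; check your plan's actual cap:

```python
# Sketch: split a long script into chunks a TTS service can process.
# The max_chars value is an illustrative assumption, not a documented limit.
import re

def chunk_script(text, max_chars=2500):
    """Split text into chunks at sentence boundaries, each under max_chars."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    chunks, current = [], ""
    for sentence in sentences:
        # Start a new chunk if adding this sentence would exceed the cap.
        if current and len(current) + len(sentence) + 1 > max_chars:
            chunks.append(current)
            current = sentence
        else:
            current = f"{current} {sentence}".strip()
    if current:
        chunks.append(current)
    return chunks

sections = chunk_script("First point. " * 300, max_chars=500)
print(len(sections), "sections; longest:", max(len(s) for s in sections), "chars")
```

Each chunk can then be submitted as its own generation request and the resulting audio files concatenated in order.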
Automating Video Production with AI 6:02
Josh discusses the ability to produce videos at scale by automating both audio and video avatars from text.
He highlights the productivity gains from using AI to generate video scripts and produce audio and video automatically.
Josh notes the cost of AI-generated voice and the strategy of using high-quality audio only when necessary.
He explains the use of draft versions of scripts with HeyGen's voice replica to refine the script without incurring additional costs.
Finalizing and Exporting Scripts 8:04
Josh describes the process of finalizing scripts and either reading and recording them manually or using the 11 Labs integration within HeyGen.
He mentions the use of a side-by-side display setup with a Google document and video avatar performance for quick edits.
Josh emphasizes the usefulness of this method for high-end projects that require detailed polishing and iteration.
He concludes the demo by encouraging the use of digital voice replicas to scale beyond time constraints and improve productivity.
Keywords: automation, content, creation, production, studio, digital, doubles, video, avatar, text, script, cloud-based, tools, slide, decks, PowerPoint, Canva, training, programs, staff, development, retention, coding, Academy, method, four
Method Four of the Ultimate Content Creation Workflow enables creators to automate their entire video production process by leveraging cloud-based tools and digital technology. By mastering this method, content creators can clone their voice, generate video avatars, and produce high-quality training videos and presentations with minimal time and effort. The workflow allows you to transform a simple text script into a fully automated video production, complete with synchronized audio, visuals, and slide decks. Ultimately, this approach empowers busy professionals to scale their content creation without being constrained by traditional time-consuming production methods.
Here are the key things you will be able to do after you watch this demo:
Clone your voice for digital content creation
Generate automated video avatars
Transform text scripts into complete video presentations
Automate slide deck production in PowerPoint and Canva
Scale content creation with minimal time investment
Develop training materials efficiently
Leverage cloud-based production tools
Create digital doubles of yourself
Streamline video production workflows
Produce high-quality educational content without extensive technical skills
Ultimate Content Creation Workflow Overview 0:08
Josh Lomelino introduces method four, which automates the entire content creation process.
This method combines the first three methods but focuses on automation, making it more efficient.
Josh emphasizes the importance of mastering the first three methods before attempting method four.
The method allows for the creation of high-quality content with minimal time, effort, and budget.
Method Four's Impact on Production 1:09
Josh describes the transformative power of method four, which revolutionized his production process.
A potential customer expressed interest in using the method for staff development and retention.
Josh explains how he creates digital doubles of himself to automate the production process.
The method enables large-scale production without the time constraints typically associated with video creation.
Addressing Time Constraints in Content Creation 1:49
Josh shares experiences of customers who face time constraints in creating training programs and classes.
He highlights the challenges of maintaining a busy schedule while keeping up with production demands.
Method four allows for the cloning of voices and creation of audio tracks to generate video avatars.
The method significantly reduces the time required to produce multiple videos.
Automation Capabilities of Method Four 2:29
Josh explains that everything in the final video is fully automated, starting from a text script.
The process involves copying and pasting the script into cloud-based production tools.
High-end computers are not necessary as most of the heavy lifting is done in the cloud.
The method also automates the creation of slide decks in tools like PowerPoint or Canva.
Step-by-Step Process Walkthrough 2:48
Josh mentions that he will walk through each part of the process in the following sections.
The detailed steps will provide a comprehensive understanding of method four.
The process aims to make content creation more efficient and less time-consuming.
Josh emphasizes the importance of understanding each step to effectively implement the method.
Keywords: Digital, doubles, AI, tools, lighting, image, quality, training, model, green
In this tutorial, Josh guides viewers through creating high-quality digital doubles using AI technology. By following his detailed workflow, users will learn how to record themselves with optimal lighting, camera angles, and techniques to capture natural movements. The process involves creating multiple avatar variations with a consistent naming system, allowing for seamless video production and editing. After completing the tutorial, viewers will be able to generate professional, versatile digital avatars that can be used across different video projects with ease and consistency.
Following are the key things you will be able to do after you watch this demo:
Create multiple avatar variations with a consistent naming system
Record high-quality source footage for AI digital double training
Select optimal recording environments (green screen or natural settings)
Capture multiple camera angles for flexible video production
Apply three-point lighting techniques for professional video quality
Use camera settings to record in 4K resolution
Develop a systematic approach to avatar creation and management
Experiment with different avatar styles and gestures
Optimize video recording for AI digital double learning
Implement a multi-camera editing workflow for seamless avatar transitions
Building Digital Doubles from Scratch 0:08
Josh Lomelino explains the importance of following earlier steps, especially around lighting and image quality, to avoid costly post-production fixes.
He emphasizes the need for a two-minute video of oneself speaking directly to the camera, suggesting the use of a wireless mouse for discreet recording.
Josh prefers recording against a green screen for flexibility in background changes, but acknowledges the natural setting option.
He recommends experimenting with different avatars, using a consistent numbering system for organization, and provides examples of naming conventions for avatar variations.
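A consistent numbering system like the one Josh recommends can be generated rather than typed by hand. The outfit/angle/version scheme below is a hypothetical illustration, not Josh's actual convention:

```python
# Sketch: generate a consistent set of avatar variation names.
# The outfit/angle/version scheme is an illustrative assumption.
from itertools import product

def avatar_names(person, outfits, angles, versions=2):
    """Return one name per (outfit, angle, version) combination."""
    return [f"{person}-{outfit}-{angle}-v{v:02d}"
            for outfit, angle, v in product(outfits, angles, range(1, versions + 1))]

names = avatar_names("josh", ["casual", "suit"],
                     ["closeup", "medium", "threequarter"])
print(len(names), "avatar names, e.g.", names[0])
```

Generating the full list up front means every recording session and upload uses the same predictable labels, which keeps multi-camera edits organized.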
Creating and Managing Avatars 3:19
Josh discusses the importance of capturing as many versions as possible for each outfit in one session to ensure consistency in hair, lighting, and clothing.
He explains his approach to recording multiple shots or angles simultaneously using different camera angles and a multi-cam edit in video editing software.
The three essential angles he always records are a close-up, a medium shot, and a three-quarter side view.
Josh mentions the challenges some AI tools pose with the three-quarter view but recommends capturing it for added realism and variety.
Recording and Equipment Considerations 4:43
Josh advises using a Logitech 4K webcam for better image quality, though a 1080p camera can also yield decent results.
He shares his experience with different recording devices, including a phone's rear-facing camera in 4K, a webcam, and a DSLR, and emphasizes the need for experimentation.
Josh recommends using the built-in Windows or Mac camera app for recording at the highest resolution possible, with instructions on adjusting settings to force 4K recording.
He advises recording a clip without the green screen, looking straight into the camera, and speaking casually to ensure the digital double learns natural behavior.
Batch Creating Avatars 6:07
Josh introduces a workflow in his video editing software for batch creating avatars, which speeds up the process.
He mentions the importance of recording a clip that is at least two minutes long to avoid issues with awkward movements being mimicked by the avatar.
Josh explains his setup for recording, including using an adjustable camera arm mounted to his desk for flexibility.
He concludes the demo by stating that he will cover more in the next video, indicating the end of the current session.
Keywords: Green screen, virtual avatar, training video, RGB, Ultra Key
In this tutorial, Josh demonstrates how to create a versatile virtual avatar using a green screen background. By following his step-by-step process, viewers will learn to record a training video, use video editing software to remove the background, and export a high-quality 4K file for avatar creation. The technique allows users to generate a digital double that can be placed on any background, enabling them to create numerous training videos, presentations, and lectures without being physically present. Ultimately, viewers will gain the skills to produce an AI avatar that can work continuously, freeing up their personal time while maintaining professional content production.
Following are the key things you will be able to do after you watch this demo:
Shoot a training video using a green screen background
Apply the ultra key filter in video editing software
Create a 100% green color matte
Remove background elements from video footage
Export high-quality 4K video files
Generate a virtual avatar using AI software
Render digital doubles for multiple presentations
Layer virtual avatars over different backgrounds
Integrate avatar presentations with PowerPoint and Canva slides
Produce training content without physical studio time
Creating a Virtual Avatar with a Green Screen Background 0:08
Josh Lomelino explains the importance of using a green screen background for creating virtual avatars, emphasizing versatility and ease of use.
He describes the general principle of achieving a 100% green background in the RGB model, noting the difficulty of achieving perfect green.
Josh introduces simple steps to help with the process, including shooting a two-minute training video on a green screen and using 100% green shapes in video editing software.
He demonstrates the use of the ultra key filter in video editing software to eliminate the background and adjust settings like feathering, key color, and matte cleanup.
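The "100% green" principle can be illustrated with a toy chroma-key sketch: pixels close to pure RGB green (0, 255, 0) become transparent. This is a simple distance threshold for illustration only, not the Ultra Key algorithm:

```python
# Toy chroma-key sketch: pixels near pure RGB green (0, 255, 0) get alpha 0.
# A simple Euclidean distance threshold, NOT Premiere's Ultra Key algorithm.
import numpy as np

def key_out_green(frame, threshold=100.0):
    """frame: (H, W, 3) uint8 RGB. Returns (H, W, 4) RGBA with green keyed out."""
    pure_green = np.array([0, 255, 0], dtype=np.float32)
    dist = np.linalg.norm(frame.astype(np.float32) - pure_green, axis=-1)
    alpha = np.where(dist < threshold, 0, 255).astype(np.uint8)
    return np.dstack([frame, alpha])

# Toy 2x2 frame: pure green, near-green, red, white.
frame = np.array([[[0, 255, 0], [20, 240, 30]],
                  [[255, 0, 0], [255, 255, 255]]], dtype=np.uint8)
rgba = key_out_green(frame)
print(rgba[..., 3])  # alpha channel: 0 where the green background was keyed
```

This is why an evenly lit, fully saturated green screen matters: the closer every background pixel sits to pure green, the cleaner the key, with no feathering or spill suppression needed.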
Setting Up the Green Screen Workflow 5:18
Josh explains the creation of a 100% green color matte in video editing software, specifying the width and height to match 4K.
He describes layering the green clip underneath the video track and extending it to the same length as the training clip.
Josh mentions the importance of placing additional green color mats to fix any spillover areas and avoid relying solely on the ultra key effect.
He outlines the process of setting in and out points, exporting the clip as an MP4 file, and using Adobe Media Encoder for batch rendering.
Exporting and Adjusting Settings 8:12
Josh details the export settings, including using the H.264 codec for high quality and specifying the file type as MP4.
He emphasizes the importance of evenly lighting the green screen for a better key and mentions common issues like wrinkles and folds.
Josh shows how to create a new avatar in HeyGen or other virtual avatar software, validating the model by reading a code aloud.
He explains the process of uploading source material, validating the camera angle, and retaining 4K footage for higher resolution renders.
Using the Virtual Avatar in Various Productions 11:27
Josh discusses the flexibility of using the virtual avatar in presentations, lectures, and demos, including mixing with PowerPoint and Canva slides.
He highlights the ability to create unlimited digital doubles and the importance of not checking the AI remove background option.
Josh explains the use of Camtasia's Remove Color effect to key out the green color in the background and the importance of using high-quality settings.
He advises against using proxy footage for making decisions about green screen settings and emphasizes the need for maximum quality settings in video editing software.
Final Steps and Infinite Possibilities 14:54
Josh concludes by mentioning the infinite possibilities of the workflow, including creating presentations directly inside HeyGen.
He discusses integrating with Canva for timed slide changes and animations, and the option to check the background removal button for a transparent background.
Josh reiterates the importance of using the method shown in the video to achieve 4K production quality, even if it requires a more expensive plan.
He wraps up the demo, encouraging viewers to explore the various applications and approaches for their virtual avatars.
Keywords: batch, avatar, digital double, production, lighting setup, color correction, video editing, project, HeyGen, encoder
In this tutorial, Josh Lomelino demonstrates a comprehensive workflow for efficiently batch producing multiple virtual avatars with consistent lighting and color quality. Viewers will learn how to set up precise video editing project settings, create a master sequence with multiple camera angles, and use Adobe Media Encoder to render individual clips for avatar training. The technique allows content creators to scale their avatar production, quickly export multiple versions of their digital doubles, and maintain a well-organized project structure that enables future edits and refinements. By following this method, users can streamline their avatar creation process, saving significant time and producing high-quality, professional virtual representations.
Following are the key things you will be able to do after you watch this demo:
Configure video editing project settings to match camera specifications
Create a systematic numbering and organization system for avatar sequences
Set up multiple camera angles within a single project
Use Adobe Media Encoder to batch render avatar clips
Export individual video files for virtual avatar training
Implement color correction and LUT modifications across multiple clips
Organize project files for efficient content production
Develop a scalable workflow for mass avatar creation
Troubleshoot and remove performance anomalies in avatar recordings
Back up and preserve digital asset production files
Setting Up Lighting and Color Values 0:08
Josh Lomelino explains the importance of setting up lighting and color values once to achieve consistent results over time.
He emphasizes the need to test lighting and color values before batch producing a group of avatars.
Josh mentions the flexibility to make further adjustments later using LUT (lookup table) color modifications or color correction tools.
The workflow allows for the efficient production of 10 to 50 avatars, ensuring visual polish from the start.
Consistency in Project Settings 1:42
Josh highlights the necessity of matching video editing project settings to the specifications of the recording camera.
He provides an example of setting up a project for a Logitech 4k camera and ensuring consistency in frame size and frame rate.
Josh advises checking file properties to extract frame size and frame rate if unsure.
Consistency in project settings is crucial for mass producing different clips.
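The file-properties check Josh recommends can be done with ffprobe (part of the ffmpeg suite), which reports a clip's frame size and frame rate. The command flags and JSON keys below are real ffprobe behavior; the file name is a placeholder, and the command is only assembled here, not run.

```python
# Sketch of verifying a source clip's frame size and frame rate before
# matching the project settings to it, using ffprobe's JSON output.
import json

def probe_cmd(path):
    """Build the ffprobe command that reports width, height, and frame rate."""
    return [
        "ffprobe", "-v", "error", "-select_streams", "v:0",
        "-show_entries", "stream=width,height,r_frame_rate",
        "-of", "json", path,
    ]

def parse_probe(output):
    """Parse ffprobe's JSON into (width, height, fps)."""
    stream = json.loads(output)["streams"][0]
    num, den = stream["r_frame_rate"].split("/")
    return stream["width"], stream["height"], int(num) / int(den)
```

Running the command via `subprocess.run(probe_cmd(path), capture_output=True)` and feeding the stdout to `parse_probe` would confirm, for example, that a Logitech 4K camera clip really is 3840x2160 at the expected frame rate.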
Creating a Master Sequence 2:59
Josh sets up a master sequence to serve as a template for duplicating sequences as needed.
He uses a clear numbering system for sequences, labeling each avatar with a specific outfit and camera angle.
Examples include "Avatar 001, direct address, no hands" and "Avatar 0013, quarter view."
Josh organizes sequences in a dedicated folder called a bin for project organization.
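The numbering scheme above can be generated mechanically so that rendered files sort predictably. The exact fields (avatar number, outfit, camera angle) are assumptions based on Josh's examples, not his literal template.

```python
# Hypothetical clip-naming helper mirroring the described scheme:
# zero-padded avatar number, then outfit, then camera angle.

def clip_name(avatar_num, outfit, angle, ext="mp4"):
    return f"Avatar_{avatar_num:03d}_{outfit}_{angle}.{ext}"

# Enumerate every avatar/angle combination up front, so the bin and the
# render queue share one consistent list of expected file names.
names = [clip_name(n, "outfit-a", angle)
         for n in (1, 2)
         for angle in ("direct", "quarter-view")]
```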
Batch Rendering with Adobe Media Encoder 4:56
Josh explains the process of adding clips to a Batch Render Queue using Adobe Media Encoder.
He selects in and out points for each camera angle, creating dedicated files for each angle.
Josh configures the encoder to render only the specified in and out range on the timeline.
Each camera angle should be exported as an individual MP4 file, specifying the folder location and file name.
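The in/out-range exports Josh queues in Adobe Media Encoder can be sketched as ffmpeg trim commands, one per camera angle. The `-ss`/`-to` flags are real ffmpeg options; the timecodes and angle names are illustrative, and the commands are only assembled here, not executed.

```python
# Sketch of batch rendering: for each named (in, out) range on the
# timeline, build one ffmpeg command that exports that range as its
# own MP4 file.

ANGLES = {
    "direct":       ("00:00:00", "00:02:00"),
    "quarter-view": ("00:02:00", "00:04:00"),
}

def trim_cmds(src, angles=ANGLES):
    cmds = []
    for name, (start, end) in angles.items():
        cmds.append([
            "ffmpeg", "-ss", start, "-to", end, "-i", src,
            "-c:v", "libx264",  # re-encode so the cut lands exactly on the in point
            f"{name}.mp4",
        ])
    return cmds
```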
Finalizing and Organizing Project Files 6:40
Josh emphasizes the importance of organizing project files, including original source files, rendered clips, and project files.
He advises saving the video editing project frequently as a fail-safe for future edits.
Josh highlights the need to review source footage for any performance anomalies and correct them.
The workflow allows for the removal of outdated avatars and recreation without problematic movements.
Backing Up and Scaling Content Production 8:25
Josh frequently backs up his entire project folder by compressing it into a zip file for disaster recovery.
He mentions the time investment upfront to create polished assets and resolve hiccups.
Josh advises starting with manual methods and gradually scaling to more advanced techniques.
The well-organized project structure saves time, enables content production scaling, and supports high-performance results.
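The zip-backup habit above is easy to automate. This is a minimal sketch, assuming a simple dated-archive naming scheme rather than Josh's exact folder layout.

```python
# Sketch of compressing the whole project folder into a dated zip file
# for disaster recovery. Folder paths are placeholders.
import datetime
import pathlib
import shutil

def backup_project(project_dir, backup_dir):
    """Zip project_dir into backup_dir, stamped with today's date.
    Returns the path of the created archive."""
    stamp = datetime.date.today().isoformat()
    base = pathlib.Path(backup_dir) / f"{pathlib.Path(project_dir).name}_{stamp}"
    return shutil.make_archive(str(base), "zip", project_dir)
```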
Keywords: automated performance, text-to-video, Otter AI, voice clone, ElevenLabs, HeyGen, audio, multilingual
In this video, Josh demonstrates how to create fully automated video performances directly from text using tools like Otter AI, 11 Labs, and HeyGen. Viewers will learn how to generate high-quality voice clones, prototype video scripts, and produce professional-looking content with minimal effort by leveraging AI-powered voice and video generation technologies. The workflow allows content creators to transform written or spoken text into polished video presentations quickly and efficiently. By following Josh's method, users can generate multiple video iterations, edit audio precisely, and create digital avatars that replicate their voice and performance with remarkable accuracy.
Following are the key things you will be able to do after you watch this demo:
Generate video scripts from transcribed audio using AI tools
Create high-quality voice clones with consistent audio recordings
Prototype video content using free and paid AI platforms
Optimize voice training for digital avatars
Manage content production across multiple AI environments
Edit audio tracks with minimal credit consumption
Develop a systematic workflow for automated video creation
Replicate personal performance using digital voice technology
Transform text-based content into professional video presentations
Implement cost-effective strategies for video and audio generation
Creating a Fully Automated Performance from Text 0:08
Josh Lomelino explains the process of creating a fully automated performance directly from text, including generating audio prompts using Otter AI.
He describes how he brainstorms ideas while walking, exports the SRT subtitle transcript file, and processes it with AI tools like Claude or ChatGPT.
Josh mentions breaking up long scripts into manageable blocks of 1800 characters and generating a year's worth of content for various platforms.
He emphasizes the use of text, whether written manually or spoken and transcribed, to craft a video script using two primary methods.
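The SRT-to-script step above can be sketched as two small functions: one that strips cue numbers and timestamps from an SRT export, and one that splits the resulting text into blocks of at most 1800 characters on word boundaries. The SRT sample in the test is illustrative, not from Josh's files.

```python
# Sketch of turning an Otter SRT export into prompt-sized text blocks.

def srt_to_text(srt):
    """Drop cue indices, timestamp lines, and blank separators,
    joining the remaining caption text into one string."""
    lines = []
    for line in srt.splitlines():
        line = line.strip()
        if not line or line.isdigit() or "-->" in line:
            continue
        lines.append(line)
    return " ".join(lines)

def chunk_text(text, limit=1800):
    """Split text into blocks of at most `limit` characters,
    breaking only between words."""
    blocks, current = [], ""
    for word in text.split():
        candidate = f"{current} {word}".strip()
        if len(candidate) > limit and current:
            blocks.append(current)
            current = word
        else:
            current = candidate
    if current:
        blocks.append(current)
    return blocks
```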
Generating High-Quality Voice Clones 1:51
Josh discusses creating a high-quality voice clone using ElevenLabs, initially finding the results artificial but later perfecting the settings.
He highlights the importance of using a consistent audio clip for training the voice digital double, ideally around three hours of spoken audio.
Josh explains the challenges of recording consistently for three hours and how he stitches together previous demo recordings to create a large audio clip.
He stresses the need for meticulous tracking of audio settings to ensure uniformity and avoid sudden changes in volume or tonal quality.
Optimizing Audio Recording for Consistency 3:36
Josh shares his experience of recording multiple live sessions with an audience, which infused the audio with personality and energy.
He explains the importance of having consistently dialed-in audio for generating a high-quality performance, as the AI listens to everything in the audio track.
Josh mentions the time and cost involved in using ElevenLabs, which can take up to six to eight hours to analyze a voice and build a model.
He advises against using cheaper models, such as the Multilingual v1 or Turbo v2.5 models, and recommends upgrading to the Multilingual v2 model for better results.
Using HeyGen for Cost-Effective Prototyping 5:35
Josh introduces HeyGen as an alternative for creating generative content when ElevenLabs burns through credits too quickly.
He explains how he trains HeyGen on his voice by uploading a 10 to 15-minute audio clip and generates unlimited videos for free, depending on the subscription plan.
Josh describes the process of creating prototypes, making real-time adjustments to the script, and rendering multiple takes.
He mentions using his phone in split-screen mode while walking to make adjustments on the fly and then copying and pasting the revised script into HeyGen.
Switching Between HeyGen and ElevenLabs 7:44
Josh explains how he can switch the voice in HeyGen to the high-quality production voice in ElevenLabs with a click of a button.
He highlights the downside of using HeyGen, which is the risk of losing all credits if there are issues with the audio track in the final video.
Josh prefers using the Studio tool in ElevenLabs for targeted editing, which allows regenerating just portions of the audio without redoing the entire clip.
He mentions the benefit of being able to download the WAV and MP3 files from the Studio tool in ElevenLabs as a fail-safe.
Organizing Video Production Phases 9:21
Josh describes his workflow of treating production as two phases: the cheap, free voice phase and the final phase.
He explains the process of pasting the text directly into the HeyGen editor, listening to the prototype, and resolving issues before creating a new file in HeyGen.
Josh organizes his videos into two folders: a prototype folder and a final folder, for easy organization of his methods.
He mentions using the Multilingual v2 model for cost-effective throwaway tests and training his voice with HeyGen for free prototyping.
Leveraging Digital Doubles for High-Quality Videos 10:34
Josh shares how he uses his digital doubles to replicate a performance of his voice and generate a corresponding video composite.
He explains how he creates a script using Otter AI during a walk, copies and pastes it into his automated workflow, and produces a high-end video with minimal effort.
Josh highlights the benefits of this workflow, which allows him to deliver excellence without skipping a beat, even when small inconsistencies would have derailed the process before.
He concludes by mentioning the next steps in the following videos, which will cover adding automated visual elements on screen behind the virtual avatar.
Keywords: AI, transcription, video, Bloom's Taxonomy, metadata, learner outcomes, content, table of contents, time codes, interactive chapters, prompts
Learn how to transform lengthy video content into easily digestible, learner-friendly resources using AI technology. This tutorial demonstrates how to automatically generate comprehensive text information including descriptions, educational outcomes, and detailed summaries directly from video transcripts. By utilizing tools like Otter AI and Anomaly Amp, you'll discover a streamlined method to create navigation cues, time-coded summaries, and interactive chapters that enhance viewer understanding and engagement. The process requires minimal manual effort while providing maximum value for learners seeking to quickly grasp the key points of extended video content.
Following are the key things you will be able to do after you watch this demo:
Analyze the process of using AI tools to generate comprehensive video metadata
Generate automated transcripts and summaries using Otter AI
Create detailed video descriptions and educational outcomes with minimal manual effort
Extract key thematic points and time-coded sections from video content
Implement interactive chapters and navigation cues in video presentations
Transform lengthy video demonstrations into learner-friendly, easily navigable resources
Generating Text Information for Video Content 0:09
Josh Lomelino introduces the purpose of the video: to show how to generate text information to support video content.
He explains the challenges of long videos and the time-consuming process of creating a manual table of contents.
Josh suggests using AI to automatically generate contextual and navigation cues for viewers.
He outlines the four main cues for learners: description, outcomes, table of contents, and interactive chapters.
Using Otter AI App for Transcription 1:40
Josh explains the process of using the Otter AI app to generate a transcript of a finished video.
He details the steps of dragging and dropping the video file into the Otter user interface for transcription.
Once the transcription is complete, Josh shows how to access the Summary tab to extract the table of contents.
He emphasizes the importance of the Summary tab in providing thematic breakdowns and time ranges.
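The time-coded breakdown the Summary tab provides can be post-processed programmatically. As a rough sketch, each heading in this document's own style ends with a timestamp like "1:40", which converts to seconds for use as chapter markers; the heading format is an assumption based on this document, not Otter's exact output.

```python
# Sketch of parsing "Title m:ss" heading lines into (title, seconds)
# chapter markers.
import re

def parse_chapters(lines):
    chapters = []
    for line in lines:
        m = re.match(r"(.+?)\s+(\d+):(\d{2})$", line.strip())
        if m:
            title, mins, secs = m.group(1), int(m.group(2)), int(m.group(3))
            chapters.append((title, mins * 60 + secs))
    return chapters
```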
Creating Descriptions and Educational Outcomes 3:44
Josh demonstrates how to generate a three to four sentence description using AI prompts in Otter.
He explains the process of copying and pasting the description into the Anomaly Amp system.
Josh highlights the importance of providing a list of educational outcomes for learners.
He shows how to use AI prompts to generate a list of outcomes based on the training script.
Formatting and Organizing Content 4:53
Josh provides tips on formatting the content in the Anomaly Amp system.
He suggests making the time codes appear as text summaries and setting them as heading two (h2) in bold.
Josh explains how to create a clear message under the outcomes heading to guide learners.
He recommends using either a numbered or bulleted list for the outcomes.
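The formatting tip above (time-coded summaries as bold h2 headings) can be sketched as a tiny text transform. Markdown output is an assumption here; the Anomaly Amp editor may apply the h2 and bold styles through its own UI instead.

```python
# Hypothetical formatter: turn each time-coded summary line into a
# bold h2 heading (markdown shown as an assumed output format).

def to_h2(line):
    return f"## **{line.strip()}**"

def format_summary(lines):
    return "\n".join(to_h2(l) for l in lines)
```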
Finalizing the Detailed Summary 5:28
Josh completes the detailed summary by including time codes for each item in the video.
He reiterates that the process requires minimal manual work and produces valuable content for learners.
Josh mentions the importance of reviewing training on Bloom's Taxonomy for proper verb usage in AI tools.
He offers supplemental files to help train AI tools to use the correct verbs for the level of learning.
Introduction to Interactive Table of Contents 6:18
Josh announces the next video, which will cover the fourth component: the interactive table of contents.
He explains that this component converts the table of contents into interactive chapters in the video.
Josh highlights the benefits of this feature for users on various devices.
He promises to show the process of creating interactive metadata in the next video.
AI Tools Overview and Links
Otter AI
Otter AI is a powerful transcription and collaboration tool that solves one of the biggest bottlenecks for membership owners and content creators: turning raw ideas and recordings into publish-ready content quickly. Instead of spending hours manually transcribing podcasts, coaching calls, or brainstorming sessions, Otter automatically converts audio into accurate, searchable text that can be repurposed into blog posts, course modules, captions, or marketing emails. For creators juggling multiple platforms and constant content demands, Otter removes the friction of documentation and frees up time to focus on engaging their audience, scaling their community, and generating revenue.
Otter AI Affiliate Link Signup (use this link)
HeyGen
HeyGen is an AI video creation platform that eliminates the need for expensive equipment, on-camera talent, and complex editing—solving a major pain point for membership owners and content creators who need consistent, professional-looking videos to engage their audiences. With HeyGen, you can instantly turn scripts into high-quality talking-head videos using realistic AI avatars, complete with voiceovers and multilingual capabilities. This allows creators to scale their content output, personalize training or marketing messages, and maintain a polished brand presence without the cost or time traditionally required for video production.
HeyGen Affiliate Link Signup (use this link)
ElevenLabs
ElevenLabs is an advanced AI voice generation platform that solves the challenge of producing high-quality, natural-sounding audio for membership owners and content creators without the ongoing need for sitting in a chair and recording your voice over and over. It allows creators to instantly convert written content—like course modules, podcasts, or marketing scripts—into realistic human-like narrations in multiple voices and languages. This not only speeds up content production but also ensures a consistent, professional sound across all audio materials, helping creators deliver a polished experience that builds trust, increases engagement, and scales their content library effortlessly.
ElevenLabs Affiliate Link Signup (use this link)
Discover how to take your app idea from concept to high-fidelity MVP with lightning speed in this hands-on demo! You’ll learn how to organize product requirements, train AI tools using your own user stories, and craft powerful prompts that supercharge no-code and low-code platforms like Lovable and Thunkable. Watch step-by-step as we merge user insights, automate prototype creation, and iterate rapidly to build a functional, customizable app without writing code. Whether you're a founder, designer, or developer, this demo will empower you to launch better products, faster.
After watching this video, viewers will be able to efficiently structure and document their product ideas, train AI tools with custom user stories and requirements, and generate detailed prompts for building full-featured app prototypes. They'll learn how to merge, organize, and optimize user stories to maximize productivity and reduce costs with AI-driven app builders like Lovable and Thunkable. By following these steps, viewers can rapidly create, customize, and iterate on high-fidelity MVPs, preparing their apps for further refinement and deployment. This workflow empowers users to leverage multiple no-code platforms and streamline their app development from concept to actionable prototype.
Understanding Pricing and Pre-Composing Chats 0:11
Josh Lomelino explains the importance of understanding pricing in AI apps, emphasizing that credits are tied to prompts and chats.
He advises pre-composing chats in tools like ChatGPT to avoid high costs in apps like Lovable, which charge based on daily credits.
Josh demonstrates how to go back to prior steps in ChatGPT to train the system on user stories and features.
He highlights the need to check whether training carries over across all chats; otherwise, the system must be explicitly asked to apply it universally.
Training and Managing Chats 4:53
Josh discusses the process of training chats on system functionality, using SRT files as an example.
He explains the incremental compounding of work in Lovable, which makes it costly to start chatting without a well-defined prompt.
Josh emphasizes the importance of optimizing the use of credits to avoid high costs, comparing it to the cost of a development team.
He mentions the potential for the browser to choke on large chats and the need to break them into manageable parts.
Merging and Organizing User Stories 7:17
Josh demonstrates how to merge multiple chats to create a faster and more efficient chat.
He explains the process of outputting user stories as a CSV and the challenges with special characters in CSV files.
Josh suggests exporting as an Excel file to fix formatting issues.
He highlights the importance of incrementally building a pipeline to automate the creation of front-end interface screens.
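The special-character problem Josh hits with CSV exports (commas and quotes inside user stories breaking the file) is the same one proper quoting solves. This is a sketch of the fix using Python's csv module, not his Excel workaround; the column names are assumptions.

```python
# Sketch of exporting user stories as CSV with full quoting, so commas
# and embedded quotes inside a story survive a round trip.
import csv
import io

def stories_to_csv(rows):
    """rows is a list of [persona, story] pairs; returns CSV text."""
    buf = io.StringIO()
    writer = csv.writer(buf, quoting=csv.QUOTE_ALL)
    writer.writerow(["persona", "story"])
    writer.writerows(rows)
    return buf.getvalue()
```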
Enhancing User Stories with Features and Acceptance Criteria 9:36
Josh adds a feature column to the user story backlog, differentiating it from user story language.
He includes acceptance criteria, which helps in testing and identifying the area within the app where the feature would exist.
Josh emphasizes the importance of documenting key wins and moments in a Google Doc for future reference.
He explains the process of comparing the current chat output with a saved Word file to ensure completeness.
Creating a Master Prompt for Lovable 17:44
Josh discusses the process of creating a master prompt for Lovable, which includes context, logical structure, explicit instructions, and adaptive considerations.
He highlights the need for granular detail to get specific UI controls in the prompt.
Josh explains the importance of saving the output as a Google Doc or GitHub repository for version control.
He demonstrates how to rewrite the master prompt to include all features in one MVP release.
Training Lovable on Documentation 42:48
Josh trains Lovable on the documentation of the tool, which helps in creating a prompt for Lovable.
He explains the process of crawling through the documentation pages and listing the pages learned from.
Josh emphasizes the importance of checking that the AI is actually doing what it claims to do.
He demonstrates how to extract and summarize recommendations from the AI.
Refining and Customizing the App 45:00
Josh refines and customizes the app by adjusting colors and mastering prompting.
He explains the process of using chat mode to plan additional features like a coach and admin portal.
Josh demonstrates how to toggle between different device types to test the app on various form factors.
He highlights the importance of iterating on the app to ensure it meets user needs and pain points.
Exploring Different Tools and Integrations 49:51
Josh explores different tools like Thunkable, Bubble IO, Cursor, Replit, Flutter Flow, and Draftbit.
He explains the process of training the AI on the documentation of these tools to create a single prompt.
Josh highlights the importance of integrating tools like Supabase and Airtable for data management.
He emphasizes the need to experiment with different tools to find the best fit for the project.
Finalizing the MVP and Next Steps 1:04:33
Josh finalizes the MVP by ensuring all features are included in the prompt.
He explains the process of exporting the code base and pushing it to GitHub for further development.
Josh highlights the importance of iterating on the app to ensure it meets user needs and pain points.
He explains the next steps of refining and customizing the app, and preparing it for deployment to the app stores.
Unlock the power of AI to supercharge your product design process! This demo guides you through capturing raw ideas via voice recordings, organizing them into agile user stories with Otter and ChatGPT, and rapidly turning those insights into working app prototypes using Figma Make. You’ll learn to mine your own thoughts for powerful features and pain points, map these to real user needs, and supercharge your workflow with cutting-edge tools. By the end, you’ll be ready to turn any burst of inspiration into design-ready prototypes and actionable development steps.
In this video, you'll learn how to transform your brainstorming sessions and unstructured ideas into actionable agile user stories using AI tools and Otter transcription. By following the process demonstrated, you'll discover how to mine your thoughts for key features and pain points, then organize them into structured requirements. Viewers will see how to use these user stories to generate rapid app prototypes with tools like Figma Make and refine them for a real-world project. By the end, you'll have the methods and confidence to turn your random ideas into clear, design-ready prototypes and workflows.
Here is the template you can clone to define your app.
Click here to get the ultimate prompt cheat sheet of every prompt used end to end.
Click here to get the 10 step workflow summary guide and supplemental resources.
AI-Driven Prototype Development Process 0:09
Josh Lomelino explains the process of creating AI-driven prototypes using tools like Figma, Proto.io, and others.
The goal is to create a template that can be integrated into manual prototypes, eventually leading to a full app experience using tools like Lovable or Bubble.
Emphasis on the importance of a clear product definition and agile user stories for successful AI development.
Josh demonstrates how to train a chat on app features and user stories, using his app "Reclaim You" as an example.
Training ChatGPT for User Stories 4:30
Josh shows how to train ChatGPT on audio brainstorming sessions using Otter for transcription.
He explains the process of exporting SRT files from Otter and using them as inputs for ChatGPT.
The goal is to capture random thoughts and ideas, which AI can then organize into structured user stories.
Josh demonstrates how to ask ChatGPT to learn from the audio files and generate actionable insights for app features and user stories.
Data Mining and Feature Identification 10:13
Josh discusses the importance of data mining and research to identify core pain points and features for the app.
He shows how to ask ChatGPT to create lists of pain points, issues, and challenges from the data set.
The process involves categorizing pain points into broad buckets like health and wellness, planning and process, motivation and mindset, teaching and engagement.
Josh emphasizes the need for a clear understanding of pain points to develop effective product solutions.
Generating Agile User Stories 17:52
Josh explains how to use ChatGPT to create detailed agile user stories based on the identified pain points.
He demonstrates the process of training ChatGPT on the framework of pain to solution for creating user stories.
The goal is to generate a comprehensive list of user stories that can be used to guide the development of the app.
Josh shows how to create personas for different user groups and generate user stories for each persona.
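The pain-to-solution framing above maps each identified pain point to a story in the standard "As a / I want / so that" shape. The personas and pain points below are illustrative, not taken from Josh's "Reclaim You" backlog.

```python
# Sketch of generating agile user stories from (persona, want, benefit)
# triples derived from pain points.

def user_story(persona, want, benefit):
    return f"As a {persona}, I want {want}, so that {benefit}."

# Hypothetical pain points mapped to stories, forming a small backlog.
pains = [
    ("busy teacher", "a five-minute planning checklist", "I start class prepared"),
    ("new member", "a guided onboarding tour", "I find the right course quickly"),
]
backlog = [user_story(*p) for p in pains]
```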
Prototype Generation with Figma Make 25:43
Josh introduces Figma Make as a tool to generate prototype screens based on the agile user stories.
He explains the process of describing the app in Figma Make, including the app store description and features.
The tool generates HTML code for the prototype screens, which can then be manually refined.
Josh emphasizes the importance of using multiple tools and integrating their outputs to create a comprehensive prototype.
UI Framework and Stencils 35:30
Josh discusses the importance of selecting a UI framework for the final app experience.
He demonstrates how to use UI kits like Bootstrap UI and Material UI to create a consistent UI workflow.
The goal is to ensure that the prototype screens match the final app experience as closely as possible.
Josh shows how to use stencils to quickly create UI elements and save time in the development process.
Reviewing and Refining the Prototype 45:41
Josh explains the importance of reviewing and refining the prototype to ensure it meets the project requirements.
He demonstrates how to identify and fix broken links and other issues in the prototype.
The process involves iterating on the prototype, incorporating feedback, and refining the UI elements.
Josh emphasizes the need for a clear and accurate input to get the best output from AI tools.
Final Steps and Best Practices 46:18
Josh outlines the final steps in the AI-driven prototype development process.
He emphasizes the importance of saving chat history and project documentation for future reference.
The goal is to create a comprehensive and accurate prototype that can be used as a starting point for the final app development.
Josh encourages the use of multiple tools and integrating their outputs to create a robust and functional prototype.