Header Images Component Tutorial
In the video above, you can use the chapters menu to jump to the main chapters, or use the time code references below to jump to specific parts manually. The video player also offers searchable transcripts. These features are shown below.
If you are looking for a quick tech demo of how to integrate the Header Image Component simply start at 1:18 in the video demo above and you will get a full breakdown of the essentials in less than two minutes.
Then continue on for the remainder of the demo to get a variety of creative design strategy tips and techniques to help provide a world-class visual experience for your site.
The header image component provides a versatile and visually impactful way to set the tone and context for web page content. This demo will show you how header images can be used in either a fixed width or full browser width layout, allowing for creative flexibility in design.
The technical steps for using the Header Image Component are simple and straightforward. As such, the primary focus of this demo is to show a variety of creative strategies for how you can use image styles to set the tone and mood of your user experience. If you are looking for just the technical steps, you can jump straight to 23:10 in the video above. You will see the steps completed in just a few clicks.
This demo covers various creative strategies, such as using blurred images, color saturation, and logo overlays, to establish the desired mood and branding. The implementation process is straightforward, leveraging Photoshop templates to size and export assets easily. The demo also emphasizes coordinating header imagery with body content to create a cohesive user experience, highlighting how the header image component can elevate a site's visual design through a simple yet effective implementation.
Header Image Component Overview [0:01]
Josh Lomelino introduces the header image component, emphasizing its optional nature but noting its importance for design aesthetics and consistency.
The header image can be used for various purposes, such as Success Path diagrams, and is flexible across different form factors (mobile, tablet, desktop).
The header image can occupy either a fixed size or full screen width, adapting dynamically to the device's size.
Josh demonstrates how the header image component adjusts its size and position on different devices, including mobile and desktop.
Fixed vs. Full Width Header Images [3:21]
Josh explains the two primary ways to use the header image component: fixed width and full width.
A fixed width image is useful for Success Path diagrams, showing the user's progress through content.
The full width image spans the entire browser width, providing a dynamic and adaptive look.
Josh shows examples of both fixed and full width images, highlighting their respective uses and benefits.
Creative Strategies for Header Images [6:58]
Josh discusses various creative strategies for using header images, including blurred images, color saturation, and logo overlays.
Blurred images can set the tone and texture of the page, while color saturation can enhance the mood of different sections.
Logo overlays can be used to show product or company logos, or sub-brands within an organization.
Photographic images, including cropped photography, can create visual interest and set the stage for the content.
Implementation and Exporting Images [10:59]
Josh provides a step-by-step guide on implementing header images, including the best image sizes for full width and fixed width images.
For full width images, the recommended size is 2300 pixels wide by 240 pixels tall.
For fixed width images, the recommended size is around 1448 by 308 pixels.
Josh demonstrates how to export images from Photoshop, ensuring they are the correct size and quality for the header component.
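The Photoshop export step above can also be scripted for batch work. Below is a minimal sketch using the Pillow imaging library (an assumption; the demo itself uses Photoshop) that crops and resizes a source image to the recommended header dimensions:

```python
from PIL import Image, ImageOps

# Recommended header image sizes from the demo.
FULL_WIDTH = (2300, 240)   # full browser width
FIXED_WIDTH = (1448, 308)  # fixed width (e.g. Success Path diagrams)

def export_header(src_path: str, dest_path: str, size: tuple[int, int]) -> None:
    """Center-crop and resize a source image to exact header dimensions,
    then save it as an optimized JPEG."""
    with Image.open(src_path) as img:
        # ImageOps.fit crops to the target aspect ratio before resizing,
        # so the output is exactly `size` with no distortion.
        header = ImageOps.fit(img.convert("RGB"), size, Image.LANCZOS)
        header.save(dest_path, "JPEG", quality=80, optimize=True)
```

The quality setting of 80 is an illustrative starting point, not a value from the demo; adjust it against your file-size budget.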
Using Templates and Media Manager [22:49]
Josh explains the use of templates for header images, including full width and fixed width templates.
The templates are structured to allow easy drag and drop of images, with layers for different elements like logos and header images.
Josh shows how to use the media manager to upload and manage images, emphasizing the importance of consistent file organization.
He also discusses the flexibility of using alternative image editing tools such as GIMP or Procreate.
Coordinating Header and Body Images [36:04]
Josh demonstrates how to coordinate header images with body images to create a unified look and feel.
He explains the process of exporting and uploading images, ensuring they are the correct size and quality.
Josh highlights the importance of file naming conventions to avoid issues with server caching.
He shows how to update and replace images in the media manager, ensuring the new images are correctly integrated into the page.
Creative Freedom and Customization [36:20]
Josh encourages users to explore different creative strategies for header images, including using stock imagery from sites like Unsplash.
He emphasizes the importance of having a clear license for any content used.
Josh demonstrates how to use different effects and adjustment layers in Photoshop to enhance the look of header images.
He shows how to create a visual content brainstorm spreadsheet to plan and organize images for different pages or classes.
Handling Image Caching and Updates [45:00]
Josh explains how to handle issues with image caching, including clearing browser cache or renaming files to force updates.
He demonstrates the process of updating and replacing images in the media manager, ensuring the new images are correctly integrated.
Josh highlights the importance of testing and refreshing the page to ensure the new images are visible.
He provides tips for managing and organizing images in the media manager to maintain consistency and efficiency.
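The file-renaming trick for defeating stale caches can be automated. A hypothetical helper (the function name is illustrative, not from the demo) that appends a short content hash before the extension, so any change to the file produces a new name and forces browsers and servers to re-fetch it:

```python
import hashlib
from pathlib import Path

def cache_busted_name(path: str) -> str:
    """Return a filename with a short content hash inserted before the
    extension, e.g. header.jpg -> header.3f2a9c1b.jpg. A changed file
    gets a new name, which sidesteps server and browser caching."""
    p = Path(path)
    digest = hashlib.sha256(p.read_bytes()).hexdigest()[:8]
    return f"{p.stem}.{digest}{p.suffix}"
```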
Final Thoughts and Best Practices [49:17]
Josh summarizes the key points of the tutorial, emphasizing the flexibility and creative freedom of the header image component.
He encourages users to explore the examples and templates provided, using them as inspiration for their own designs.
Josh highlights the importance of consistent file organization and proper image sizing for optimal performance.
He concludes with a reminder to always test and refresh the page to ensure new images are correctly displayed.
Automated Video Production Pipeline
This video guides you through setting up an automated video production pipeline, from selecting and testing brand voices using Eleven Labs to pairing them with digital avatars in HeyGen. By following the steps, you'll learn how to catalog and integrate voices, match them with visual characters, and generate preview videos for evaluation. Once you complete the video, you'll be able to efficiently create, test, and organize multiple spokesperson options for your brand's automated content generation. This process empowers you to streamline video production and build a scalable library of branded video assets.
Following are the key things you will be able to do after you watch this demo:
Identify suitable brand voices using generative AI tools.
Catalog and organize voice and avatar options for efficient selection.
Integrate third-party voices into video production platforms.
Pair voices with digital avatars to create compelling spokesperson combinations.
Generate and preview automated video content for evaluation.
Document and track production assets for streamlined workflow.
Select and finalize top spokesperson options for automated content generation.
Introduction to Automated Video Production Pipeline (00:00:00 – 00:00:59)
Josh kicks off the demo by outlining the goal: selecting brand-aligned voices and digital doubles (either your own clone or hired actors), organizing those assets, and laying out the end-to-end steps needed to spin up a fully automated video production pipeline.
Content Sequencing Concept and Cloning (00:00:59 – 00:02:20)
He explains the core idea of building a repeatable sequence of content—cloning a finished production over and over—so you can continually generate new videos by plugging different scripts into the same automated workflow.
Defining Digital Doubles and Voice Types (00:02:20 – 00:03:11)
Josh clarifies terminology (digital twin vs. digital double), walks through the two main “buckets” of voice assets (personality-based clones vs. spokesperson avatars), and discusses how to mix and match them depending on your brand needs.
Selecting Platforms for Generative AI and Deployment (00:03:11 – 00:04:00)
He emphasizes the importance of vetting your generative-AI tools—voice engines and video avatars—and making sure they’re compatible with your target platforms before committing to any given solution.
Brand-Focused Workflow and SRT Utilization (00:04:00 – 00:05:25)
Josh decides to focus on one streamlined method for this demo, using a single SRT transcript file as the “source of truth” for automation—underscoring that a clean, well-formatted SRT is absolute gold when you’re architecting an automated pipeline.
Importing SRT and Leveraging Automation (00:05:25 – 00:07:40)
He shows how to import the SRT into the voice-generation platform, highlighting how the time-coded script drives every subsequent step—from audio rendering to scene assembly.
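To make the "SRT as source of truth" idea concrete, here is a minimal sketch (stdlib only; not the demo's actual tooling) of parsing an SRT transcript into time-coded segments that downstream steps could consume:

```python
import re

# One SRT cue: sequence number, "start --> end" timecodes, caption text.
SRT_BLOCK = re.compile(
    r"(\d+)\s*\n"                                   # sequence number
    r"(\d{2}:\d{2}:\d{2}[,.]\d{3})\s*-->\s*"        # start timecode
    r"(\d{2}:\d{2}:\d{2}[,.]\d{3})\s*\n"            # end timecode
    r"(.*?)(?=\n\s*\n|\Z)",                         # caption text
    re.DOTALL,
)

def parse_srt(text: str) -> list[dict]:
    """Parse an SRT transcript into a list of time-coded segments."""
    return [
        {"index": int(n), "start": start, "end": end, "text": body.strip()}
        for n, start, end, body in SRT_BLOCK.findall(text)
    ]
```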
Setting Up Voice Design in ElevenLabs (00:07:40 – 00:11:49)
A step-by-step walkthrough of testing voice presets, tweaking text lengths, integrating third-party voices, and crafting voice-design prompts to nail down the exact tone and style you want.
Managing Credits and Reviewing Generated Audio (00:11:49 – 00:15:46)
Josh demonstrates how to monitor and conserve your generation credits, preview the rendered audio, swap out placeholder text, and ensure you’re only spending resources on polished clips.
Applying Voiceover and Text Overlays to Video (00:15:46 – 00:19:08)
He attaches the finalized voice track to the video timeline, adds and styles text overlays (centering, contrast adjustments), and assembles the basic video composition ready for export.
Enhancing Prompts with AI Tools for Voice Design (00:19:08 – 00:22:04)
Introduces additional AI utilities for brainstorming and refining your voice-design prompts—showing how to iterate until you get a sample that truly matches your brand voice.
API Key Handling and Asset Export Configuration (00:22:04 – 00:27:28)
A practical guide on securely copying your ElevenLabs API key, configuring export settings (e.g., 4K output), and organizing all generated files into branded folders for easy access.
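A sketch of the key-handling side of this step: read the API key from an environment variable rather than hard-coding it into scripts. The endpoint path and header name below follow ElevenLabs' public v1 API as commonly documented, but treat them as assumptions and verify against the current API reference:

```python
import os

def elevenlabs_request_config(voice_id: str) -> dict:
    """Build request settings for a text-to-speech call, pulling the API
    key from the environment so it never lands in source control."""
    api_key = os.environ["ELEVENLABS_API_KEY"]  # raises KeyError if unset
    return {
        "url": f"https://api.elevenlabs.io/v1/text-to-speech/{voice_id}",
        "headers": {
            "xi-api-key": api_key,           # ElevenLabs auth header
            "Content-Type": "application/json",
        },
    }
```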
Frame Rate Considerations and Quality Checks (00:27:28 – 00:31:42)
Notes the default 25 fps setting, explains how frame rate impacts perceived motion, and walks through checking your export quality to avoid any unexpected artifacts.
Avatar Adjustments, Project Naming, and Fallbacks (00:31:42 – 01:05:16)
Josh covers fine-tuning avatar scale and positioning, updating project names for consistency, and setting up fallback workflows if you need to swap voices or visuals mid-pipeline.
Avatar Replacement and Cataloging (00:31:42 – 00:34:06)
Pair your chosen voice with visuals by replacing the default avatar, browsing through the 21 “looks” in each category, using the snipping tool to capture promising thumbnails, and logging each candidate’s name and category in your tracking spreadsheet.
Avatar Testing and Video Formatting (00:34:07 – 00:36:24)
Brainstorm voice–visual combinations (e.g. “August”), select a portrait-mode avatar, preview the static image, upload any custom avatars into the pipeline, drag your source video beneath the avatar layer, and confirm the composition and framing.
Voice-Avatar Sync and Quality Comparison (00:36:24 – 00:37:39)
Generate audio samples to compare HeyGen vs. ElevenLabs quality, force-refresh the clip to confirm it’s using the intended voice (e.g. Ryan Kirk), and watch for the spinning indicator to verify successful render.
Preview Generation and File Labeling (00:38:10 – 00:39:11)
Render a 4K preview of the voice-avatar pairing, then label the export asset with your convention (e.g. 001_RyanKirk_CharlieAvatar) so each test remains organized and easily identifiable.
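The labeling convention above (zero-padded sequence, voice, avatar) is easy to generate consistently; a small illustrative helper:

```python
def audition_label(seq: int, voice: str, avatar: str) -> str:
    """Build an export label like 001_RyanKirk_CharlieAvatar: a
    zero-padded sequence number plus the voice and avatar names
    with internal spaces removed."""
    def clean(name: str) -> str:
        return "".join(name.split())
    return f"{seq:03d}_{clean(voice)}_{clean(avatar)}"
```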
Pipeline Duplication for Variant Testing (00:39:11 – 00:41:15)
Duplicate the entire sequence to create “Test 002,” swap in a new avatar (such as Colton), explore lifestyle/UGC categories, and note how background removal and frame size affect the final look.
Background Removal and Frame Adjustments (00:41:15 – 00:42:32)
Apply the background-remover tool to avatars with built-in backgrounds, observe any cut-offs (like arms being cropped), tweak the canvas framing, and decide between static vs. transparent backgrounds based on brand needs.
Third-Party Voice Integration Workflow (00:42:32 – 00:44:03)
In the “My Voices” tab, toggle on integrated voices (e.g. Charlie), heart your favorites so they surface first, preview each sample, and ensure the API integration is active before proceeding.
Voice Audition Labeling and Mood Board Documentation (00:44:03 – 00:47:09)
Name each audition (e.g. 002_CharlieAvatar), update your mood board with snipped thumbnails, record which browser tab or category each came from, and keep this documentation up to date for reproducibility.
Frame Rate and Credit Management (00:47:09 – 00:48:06)
Note the default 25 fps setting—mismatches can cause audio sync issues—toggle off “Avatar 4” if you’re on an unlimited plan, and monitor your generation credits to avoid unexpected limits.
Styling and Folder Organization (00:48:06 – 00:49:29)
Adjust text overlay colors to maintain contrast (match your brand palette), create new folders for each batch, and standardize your output directory structure so you know exactly where each rendered clip lives.
Option Preview and Cataloging Workflow (00:49:30 – 00:55:51)
Refresh thumbnails, scroll through voice-avatar combos, assign option numbers, screenshot grids of candidates, and log each pairing’s status (“Yes,” “Maybe,” “No”) in your spreadsheet.
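The tracking spreadsheet can live as a plain CSV that scripts append to. A minimal sketch with hypothetical column names (the demo's actual spreadsheet layout may differ):

```python
import csv
import os

FIELDS = ["option", "voice", "avatar", "status"]  # illustrative columns

def log_pairing(path: str, option: int, voice: str, avatar: str, status: str) -> None:
    """Append one voice-avatar pairing to the tracking CSV, writing the
    header row only when the file is first created."""
    if status not in {"Yes", "Maybe", "No"}:
        raise ValueError(f"unexpected status: {status}")
    is_new = not os.path.exists(path)
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if is_new:
            writer.writeheader()
        writer.writerow(
            {"option": option, "voice": voice, "avatar": avatar, "status": status}
        )
```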
Iteration Process and Consistency Notes (00:55:51 – 00:57:23)
Always regenerate every variation (never reuse stale renders), note any limitations (e.g. animated text can cover on-screen elements), and keep your naming and documentation consistent so the pipeline remains bullet-proof.
Ranking Options and Visual Separators (00:57:24 – 01:02:40)
Introduce visual separators in your catalog (e.g. blank rows), rank the top voice-avatar combos, screenshot your “definite yes” list, and preserve those as templates for future batches.
Additional Voice Integration: Amelia (01:02:40 – 01:04:33)
Search for “Amelia” in your voice library, verify whether it’s built-in or needs third-party integration, add it to favorites, preview the sample, and record its ID for consistent reuse.
Final Voice Candidate Integration (01:04:33 – 01:05:16)
Confirm Amelia’s render, then search for any last candidates (e.g. “Analore”), heart and test them, catalog the results, and ensure each new voice is fully integrated into the pipeline.
Final Pipeline Recap and Scale Duplication (01:07:40 – 01:08:34)
Recap how you’ve selected your final set of voices and avatars, finalize your naming conventions, and highlight that you can now duplicate this entire automated workflow to churn out an endless library of on-brand social-media videos.
Image Slider Component Demo
This video provides a comprehensive guide on how to set up and optimize image sliders for websites with a focus on mobile-first design. Viewers will learn how to leverage pre-designed slider templates, properly size and export slider images, and integrate the sliders into a content management system (CMS) while ensuring optimal responsiveness across different devices and form factors. By following the steps demonstrated, users will gain the skills to create high-quality, mobile-friendly image sliders that provide an engaging and seamless experience for their website visitors.
1172 pixels wide by 580 pixels tall (1172 x 580) is the ideal slider image size; in our experience it works well across the huge range of devices on the market. With the template linked on this page in the supplemental resources (also shown in this demo), you can drag and drop images into sliders, or you can make your own images from scratch using the concepts shown in the demo.
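If you build slider images outside the template, a quick automated check of the dimensions can catch mistakes before upload. A minimal sketch using the Pillow imaging library (an assumption; any image tool works):

```python
from PIL import Image

SLIDER_SIZE = (1172, 580)  # recommended slider dimensions from this page

def check_slider_image(path: str) -> bool:
    """Return True if an image matches the recommended 1172 x 580 slider
    dimensions exactly, so it displays without unexpected cropping."""
    with Image.open(path) as img:
        if img.size != SLIDER_SIZE:
            print(f"{path}: {img.size[0]} x {img.size[1]}, expected 1172 x 580")
            return False
    return True
```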
Here are the key things you will be able to do after you watch this demo:
Understand the importance of mobile-first design and responsive layout considerations when setting up image sliders on a website.
Identify the safe zones and optimal image dimensions for creating mobile-friendly sliders that avoid text and content cutoff.
Utilize developer tools to test and analyze the responsiveness of image sliders across different device form factors and orientations.
Access and leverage pre-designed slider templates to quickly create high-quality, mobile-optimized sliders.
Effectively edit, export, and optimize slider images for web performance, ensuring fast loading times and minimal bandwidth consumption.
Integrate and manage slider images within a content management system (CMS), including uploading, cropping, and linking functionality.
Apply best practices for maintaining the recommended slider image dimensions and safe zones when directly editing and modifying images in the CMS.
Setting Up Image Sliders on Websites (0:00)
Josh Lomelino introduces the topic of setting up image sliders on websites, emphasizing their use on the home page and other pages.
He highlights the importance of mobile responsive design, showing how sliders can be clicked through and swiped on different devices.
Josh explains the concept of mobile-first design and how to use developer tools to toggle between different device formats.
He mentions the importance of optimizing sliders for various form factors, including landscape and portrait modes.
Optimizing Sliders for Mobile Responsive Design (2:01)
Josh discusses the challenges of ensuring text visibility and avoiding text cutoff in sliders.
He demonstrates how to test sliders using developer tools and highlights the importance of keeping key information within the safe zone.
Josh shows an example of a slider that is not optimized and compares it to a well-optimized one, emphasizing the need for proper image cropping.
He explains how to use developer tools to analyze the responsiveness of sliders on different devices.
Using Templates for Image Sliders (4:28)
Josh introduces templates linked on the page that help users create amazing sliders with minimal effort.
He explains how to use the home page slider design template in Photoshop or other applications like GIMP or Canva.
Josh demonstrates how to open the PSD file, turn visibility on and off for different slider layouts, and add text overlays.
He emphasizes the importance of safe regions and proper image dimensions for optimal display on various devices.
Implementing and Optimizing Sliders (10:13)
Josh shows how to drag and drop images into the template and export them for use on the website.
He explains the importance of optimizing images for mobile to ensure fast loading times and minimal bandwidth consumption.
Josh demonstrates how to export images using Photoshop's "Save for Web" feature and adjust file sizes for optimal performance.
He shows how to upload and integrate the exported image into the CMS, ensuring proper linking and formatting.
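The "Save for Web" size/quality trade-off can be approximated in code by stepping the JPEG quality down until the file fits a budget. A sketch using Pillow (an assumption; the demo uses Photoshop, and the 200 KB budget is illustrative):

```python
from io import BytesIO
from PIL import Image

def save_for_web(img: Image.Image, dest_path: str, max_kb: int = 200) -> int:
    """Save a JPEG under a target file size by lowering quality in steps,
    mimicking Photoshop's Save for Web trade-off. Returns the quality
    actually used; if no step fits, the lowest quality is written."""
    quality = 85
    for quality in range(85, 30, -5):
        buf = BytesIO()
        img.convert("RGB").save(buf, "JPEG", quality=quality, optimize=True)
        if buf.tell() <= max_kb * 1024:
            break  # small enough for the mobile bandwidth budget
    with open(dest_path, "wb") as f:
        f.write(buf.getvalue())
    return quality
```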
Managing Images in the CMS (13:22)
Josh explains how to modify existing sliders or create new ones in the CMS.
He demonstrates how to specify the number of items in a slider and link images to specific pages.
Josh shows how to upload images directly into the CMS and ensure they are properly formatted and linked.
He explains how to use the CMS to crop and modify images directly, maintaining the recommended dimensions for mobile responsiveness.
Keywords: audio, recording, microphone, quality, live, studio, interface, phantom power, sample rate, uncompressed format, pop filter, level balancing, Camtasia Studio, file organization, voice clone, AI avatar, sound absorption
This video provides a comprehensive guide to professional audio recording for content creators, focusing on essential equipment and techniques for high-quality sound production. Viewers will learn how to select the right microphone, set up a proper recording environment, and use audio interfaces and editing tools to capture clean, professional-grade audio. By following Josh Lomelino's expert advice, participants will be able to create polished audio recordings suitable for workshops, demos, podcasts, and even AI-generated video content. The tutorial equips creators with practical skills to improve their audio recording process and produce more engaging, professional-sounding content.
Here are the key things you will be able to do after you watch this demo:
Select an appropriate high-quality microphone for professional audio recording
Set up a clean, noise-free recording environment
Configure audio interfaces and software for optimal sound capture
Choose the correct sample rate and recording format
Use a pop filter and mic positioning techniques to improve audio quality
Perform audio test recordings and evaluate sound levels
Utilize audio editing tools for recording and post-production
Implement file organization strategies for audio projects
Export audio files in various formats for different content needs
Create consistent, professional-grade audio recordings for workshops, demos, and presentations
Prepare audio recordings for potential AI avatar or voice clone generation
Troubleshoot common audio recording and equipment setup challenges
Basic Method of Production 0:09
Josh Lomelino explains the simplicity and power of recording thoughts and ideas using just a microphone.
Live recordings during workshops or demos are more engaging but harder to edit if mistakes are made.
Studio recordings allow for pauses and polished takes but require maintaining a natural and conversational tone.
The importance of a high-quality microphone and a quiet, clean recording space is emphasized.
Microphone Setup and Recording Quality 1:31
Josh recommends the AKG condenser mic for its clean, detailed sound, which requires phantom power.
The Shure SM57 microphone is mentioned as a versatile option for various recording situations.
The Zoom H6 USB audio interface is preferred for its compatibility with various software like Camtasia.
Recording at 48 kHz instead of the default 44.1 kHz is suggested to preserve audio detail.
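A mismatched sample rate is easy to catch programmatically. A minimal sketch using Python's standard-library wave module (illustrative; not a tool from the demo) that verifies a recording was captured at 48 kHz:

```python
import wave

def check_sample_rate(path: str, expected_hz: int = 48000) -> bool:
    """Open a WAV file and verify it was recorded at the expected sample
    rate (48 kHz here, rather than the common 44.1 kHz default)."""
    with wave.open(path, "rb") as wav:
        rate = wav.getframerate()
    if rate != expected_hz:
        print(f"{path}: recorded at {rate} Hz, expected {expected_hz} Hz")
        return False
    return True
```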
Audio Recording Practices 3:18
Josh advises recording in an uncompressed format like WAV until the final export to avoid audio degradation.
Ensuring the computer and audio interface are set to the same sample rate prevents speed mismatches.
The use of a pop filter and an adjustable mic arm helps maintain consistent audio quality.
Test recordings and listening on different devices help ensure balanced sound levels.
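Checking levels on a test recording can be done numerically as well as by ear. A sketch (stdlib only) that computes peak and RMS levels in dBFS for 16-bit mono PCM; the -18 dBFS speech target mentioned in the comment is a general convention, not a figure from the demo:

```python
import math
import struct

def level_dbfs(frames: bytes) -> tuple[float, float]:
    """Compute (peak, RMS) levels in dBFS for 16-bit mono PCM audio.
    Peaks near 0 dBFS risk clipping; an RMS around -18 dBFS is a
    common target for recorded speech."""
    samples = struct.unpack(f"<{len(frames) // 2}h", frames)
    full_scale = 32768.0
    peak = max(abs(s) for s in samples) / full_scale
    rms = math.sqrt(sum(s * s for s in samples) / len(samples)) / full_scale
    def to_db(x: float) -> float:
        return 20 * math.log10(x) if x > 0 else float("-inf")
    return to_db(peak), to_db(rms)
```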
Audio Editing and Tools 4:53
Josh mentions various audio editing tools like Audacity, Adobe Audition, Pro Tools, and FL Studio.
Camtasia Studio is recommended for its convenience in recording and managing audio projects.
The Auto Normalize feature in Camtasia helps maintain consistent volume throughout recordings.
Exporting recordings as MP3s allows for generating on-camera videos using AI avatars.
File Organization and Studio Setup 5:55
A consistent naming system for recordings and exports is crucial for easy retrieval and updates.
Avoiding rooms with echo and using soft materials to absorb sound helps improve recording quality.
A good studio setup, including soundproofing and proper equipment, is essential for high-quality recordings.
Josh hints at a future demo on creating a voice clone, which requires clean and consistent audio recordings.
In this video, Josh Lomelino demonstrates how to create an AI-powered digital voice replica using ElevenLabs, enabling content creators to rapidly generate high-quality audio and video content at scale. By training the system with a consistent audio sample, users can produce automated voice performances that sound like their own, allowing them to create lectures, demos, and other content quickly and efficiently. The method involves uploading 1-3 hours of controlled audio recordings, fine-tuning voice settings, and integrating with platforms like HeyGen to automate video production. After watching this tutorial, viewers will be able to develop their own AI voice clone, streamline content creation, and overcome time constraints by generating multiple scripts and videos with minimal manual effort.
Here are the key things you will be able to do after you watch this demo:
Train an AI voice synthesis system using personal audio recordings
Generate consistent voice replicas with controlled audio samples
Optimize AI-generated voice settings for natural-sounding output
Integrate voice cloning technology with video production platforms
Create automated content at scale using text-to-speech technologies
Manage AI voice generation credits efficiently
Export and store audio files in multiple formats for different applications
Prototype and refine scripts using AI voice technology
Develop a workflow for rapid content creation across lectures, demos, and presentations
Leverage AI tools to overcome time constraints in content production
Creating a Voice Replica Using AI 0:09
Josh Lomelino discusses the use of AI-powered voice synthesis to create a voice replica, emphasizing the challenge of matching human recordings.
He highlights the effectiveness of using text prompts to quickly prototype, test, and revise scripts or generate finished audio files.
Josh mentions his preference for the ElevenLabs tool, which offers a studio mode for producing longer-form audio tracks.
He shares his initial struggles with the tool and how contacting their support provided helpful suggestions.
Training the System for Consistent Output 1:24
Josh explains the importance of training the system with a consistent audio sample to avoid unnatural variations in volume and tone.
He describes his initial mistake of using diverse recordings from different sessions, which led to inconsistent results.
Josh emphasizes the need for a controlled environment with a single, consistent audio sample for better results.
He plans to demonstrate the settings that produce the best results for replicating his voice in the user interface.
Optimizing Generated Audio Files 2:56
Josh advises generating audio sparingly to avoid exhausting monthly credits and recommends starting with smaller sections of text.
He explains the process of refining the output and generating both WAV and MP3 audio files for different applications.
Josh mentions the importance of storing both WAV and MP3 files for secure storage and project organization.
He notes that it may take several attempts to develop a method that works well for the user.
Exporting and Integrating Audio Files 4:19
Josh describes two methods for uploading audio files to virtual avatars: exporting both WAV and MP3 versions, or integrating the ElevenLabs API directly with HeyGen.
He prefers using the WAV audio file for higher quality and to avoid double compression, but acknowledges the need to export the MP3 format for larger tracks.
Josh explains the integration of the ElevenLabs API with HeyGen, which allows for rapid development of prototypes and large volumes of content.
He mentions the need to break up scripts into manageable sections for efficient processing by the software.
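Breaking a script into manageable sections can be automated by splitting at sentence boundaries. A sketch (the 2500-character budget is illustrative; check your platform's actual per-request limit):

```python
import re

def split_script(text: str, max_chars: int = 2500) -> list[str]:
    """Split a script into chunks under max_chars, breaking only at
    sentence boundaries so each chunk reads naturally when synthesized.
    A single sentence longer than the budget is kept intact."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    chunks, current = [], ""
    for sentence in sentences:
        if current and len(current) + len(sentence) + 1 > max_chars:
            chunks.append(current)
            current = sentence
        else:
            current = f"{current} {sentence}".strip()
    if current:
        chunks.append(current)
    return chunks
```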
Automating Video Production with AI 6:02
Josh discusses the ability to produce videos at scale by automating both audio and video avatars from text.
He highlights the productivity gains from using AI to generate video scripts and produce audio and video automatically.
Josh notes the cost of AI-generated voice and the strategy of using high-quality audio only when necessary.
He explains the use of draft versions of scripts with HeyGen's voice replica to refine the script without incurring additional costs.
Finalizing and Exporting Scripts 8:04
Josh describes the process of finalizing scripts and either reading and recording them manually or using the ElevenLabs integration within HeyGen.
He mentions the use of a side-by-side display setup with a Google document and video avatar performance for quick edits.
Josh emphasizes the usefulness of this method for high-end projects that require detailed polishing and iteration.
He concludes the demo by encouraging the use of digital voice replicas to scale beyond time constraints and improve productivity.
Keywords: screen recording, live audio, Camtasia, high resolution, 4K, 8K, graphics processing unit, system specifications, test recordings, MP4 file, video quality, rendering process, artificial intelligence, computer-generated avatar, performance optimization
In this video, Josh Lomelino teaches how to create high-quality screen recordings with separate audio tracks, providing flexibility in content creation. Viewers will learn technical tips for recording at 4K or 8K resolution, including how to optimize system settings, graphics performance, and recording software. The tutorial demonstrates how to use Camtasia's features like the F9 hotkey to pause and resume recording seamlessly, allowing for more natural and efficient content production. By following these techniques, creators can produce professional-looking screen recordings with minimal post-production editing.
Here are the key things you will be able to do after you watch this demo:
Configure computer settings for high-resolution screen recording
Optimize graphics acceleration for smooth video capture
Use Camtasia's F9 hotkey to pause and resume screen recordings
Separate screen and audio recording for more flexible content creation
Select appropriate system specifications for 4K and 8K recording
Troubleshoot audio and video synchronization issues
Export screen recordings with optimal file quality settings
Implement a streamlined recording workflow that reduces post-production editing time
Screen Recording and Audio Recording Techniques 0:00
Josh Lomelino introduces the session on creating a screen recording along with a live audio recording.
He explains the benefits of recording screen and audio independently, allowing for more flexibility and less editing time.
Josh mentions the use of a hot key (F9) in Camtasia to pause and resume recording without worrying about facial expressions.
He highlights the ability to pause and resume recording to research or practice, making the final edit seamless.
Technical Challenges and Solutions for High-Resolution Recording 2:02
Josh discusses the technical challenges of recording high-resolution footage, such as 4K or 8K, and the importance of meeting system specifications.
He emphasizes the need for a dedicated graphics processing unit (GPU) to handle the workload and ensure better performance.
Josh advises checking system specifications against recording software to confirm compatibility.
He suggests ensuring the primary monitor supports the desired resolution to avoid issues during recording.
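The jump from 4K to 8K is larger than it sounds. A quick back-of-the-envelope calculation (using standard UHD frame dimensions, not figures from the video) shows why a dedicated GPU matters:

```python
# Standard consumer UHD frame dimensions (assumed; not stated in the video).
RESOLUTIONS = {
    "1080p": (1920, 1080),
    "4K":    (3840, 2160),
    "8K":    (7680, 4320),
}

def pixels_per_frame(name):
    """Total pixels the system must capture and encode for one frame."""
    width, height = RESOLUTIONS[name]
    return width * height

# 8K carries four times the pixels of 4K, and sixteen times 1080p,
# which is why GPU headroom and a matching primary monitor matter.
ratio_8k_to_4k = pixels_per_frame("8K") / pixels_per_frame("4K")     # 4.0
ratio_4k_to_hd = pixels_per_frame("4K") / pixels_per_frame("1080p")  # 4.0
```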
Optimizing Graphics Acceleration Settings 3:13
Josh provides detailed steps to optimize graphics acceleration settings for high-performance recording.
He recommends configuring the graphics card for high performance and setting the operating system to high performance mode.
Josh advises checking the recording software settings for optimal performance.
He suggests running test recordings to ensure audio and video sync and to avoid post-recording editing issues.
Final Export and Rendering Tips 4:35
Josh advises using Camtasia's optimal settings to produce an MP4 file with a quality setting of around 75% for manageable file sizes.
He recommends capturing multiple screen recordings that can be compiled into a single video.
Josh suggests following through with the entire rendering process when exporting the final video.
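Camtasia handles these settings through its own export dialog, but the same idea can be sketched as a command line for readers who batch-render outside Camtasia. This is an illustrative ffmpeg invocation, not the demo's method, and using CRF 23 as a stand-in for a "75% quality" slider is an assumption:

```python
# Illustrative only: an ffmpeg equivalent of "export MP4 with H.264 at ~75%
# quality". Camtasia's percentage slider and ffmpeg's CRF scale are different
# systems; CRF 23 as a stand-in for 75% is an assumption.

def mp4_export_command(src, dst, crf=23):
    """Build (but do not run) an H.264 MP4 export command."""
    return [
        "ffmpeg", "-i", src,
        "-c:v", "libx264",   # H.264 video codec
        "-crf", str(crf),    # lower CRF = higher quality, larger file
        "-c:a", "aac",       # widely compatible audio codec
        dst,
    ]

cmd = mp4_export_command("take_01.mov", "take_01.mp4")
```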
He concludes the session by encouraging practice and looking forward to seeing the participants' creations.
Keywords: screen recording, audio capture, on-camera presentation, production challenges, lighting consistency, studio lights, color temperature, LED panels, backlights, kicker light, digital double, 4K webcam, system performance, green screen, Camtasia
In this video, Josh Lomelino demonstrates Method Three for creating engaging screen recordings that combine on-camera presence, screen capture, and audio. Viewers will learn how to set up professional lighting using LED panels, choose the right camera equipment, and optimize their recording environment for high-quality video production. The tutorial covers essential techniques for maintaining visual continuity, managing lighting color temperatures, and using tools like Camtasia and green screens to create polished, professional-looking video content. By following Josh's guidance, content creators will be able to produce dynamic, natural-looking screen recordings with improved technical quality and visual appeal.
Here are the key things you will be able to do after you watch this demo:
Manage on-camera and screen recording simultaneously
Maintain visual continuity during video recordings
Set up professional lighting using LED panels
Adjust color temperature and brightness for optimal video quality
Create a three-point lighting setup with key, fill, and kicker lights
Select and configure appropriate camera equipment for video production
Optimize system performance for screen and camera recording
Use a Wacom tablet for digital whiteboarding
Implement green screen techniques for background removal
Combine multiple video takes into a seamless recording
Export and render high-quality video files
Create digital double avatars for reusable content
Troubleshoot common video production challenges
Select and position lighting equipment safely
Integrate on-camera performance with slides and screen recordings
Method Three Demo and Challenges 0:08
Josh Lomelino introduces method three, which involves screen recording, audio, and on-camera capture, emphasizing its ability to capture natural, unscripted moments.
He highlights the challenges of managing both screen and camera presence simultaneously, including the need to maintain a consistent camera angle and expression.
Josh explains the importance of resuming recording with a neutral expression to ensure visual continuity.
He mentions the difficulty of pausing and resuming recording without noticeable edits when on camera.
Lighting Considerations for On-Camera Work 1:46
Josh discusses the significance of lighting in on-camera work, including the need to keep lighting consistent between takes.
He recommends using affordable studio lights, such as LED lights, which stay cool and are suitable for longer sessions.
Josh explains the concept of color temperature, noting that outdoor light can affect indoor lighting and cause color shifts.
He suggests using LED lights that allow adjustments in brightness and color temperature to manage lighting effectively.
Setting Up Lighting Equipment 3:02
Josh shares his preference for the Spectro Essential 360 LED panels, which range from 3,250 to 6,000 Kelvin and are dimmable.
He describes his typical setup, which includes stacking four LED panels in front and sometimes behind him to create soft, even light.
Josh emphasizes the importance of using backlights to create a "kicker light" effect, which helps outline the subject and makes them stand out from the background.
He advises adding weight to light stands to prevent them from tipping if bumped.
Camera and Recording Equipment 6:07
Josh talks about using a full-frame camera like the Canon 5D Mark III for high-quality recordings, but notes that a good 4K webcam can also deliver excellent results.
He recommends Logitech webcams, such as the Logitech 1080P cam, for their affordability and performance.
Josh explains the benefits of recording screen and camera separately, especially if the system can't handle 4K video and screen capture simultaneously.
He mentions the use of digital double avatars for reusing lighting and performance footage.
Optimizing System Performance and Audio Settings 8:15
Josh advises optimizing the graphics card and operating system for better system performance.
He recommends setting the microphone sample rate to 48,000 Hz (48 kHz) and ensuring phantom power is turned on through the sound interface.
Josh suggests using a Wacom tablet for live whiteboarding, either with the Cintiq for direct drawing or a more affordable tablet for drawing on a pad.
He emphasizes the importance of setting pen lines thick enough to show clearly in high-resolution recordings.
Using Camtasia and Green Screens 9:20
Josh highlights Camtasia's ability to combine multiple takes into one smooth recording and overlay on-camera performance videos on PowerPoint slides.
He explains the use of green screens for added flexibility, including the need to light the green screen evenly and separately from the face lighting.
Josh mentions the built-in removal tool in Camtasia for easily removing the green screen background.
He advises fine-tuning the green screen setup to avoid issues with hair and shoulder edges.
Exporting and Backing Up Videos 10:17
Josh recommends exporting videos as MP4 files using the H.264 format with a rendering quality around 75%.
He advises keeping files organized and backed up for potential updates.
Josh mentions the use of green screens for recording digital double avatar videos, which can be easily removed from the background later.
He concludes the demo by encouraging viewers to invest in high-quality audio and video assets for better results.
Keywords: AI-generated video, 4K resolution, workflow optimization, content longevity, editing software, avatar export, quarter-screen principle, green screen workflows, automated production, performances, audio files, text-to-performance tools, cloud storage, local backups
In this video, you'll learn how to create a digital double avatar for automated video production, with a focus on optimizing workflow and resolution strategies. You'll discover techniques for producing high-quality avatars, including how to effectively composite 1080p avatars into 4K projects and create flexible avatar sets with multiple poses and angles. The tutorial will guide you through green screen workflows and demonstrate methods for automating avatar performances using audio and text-to-performance tools. By the end, you'll have a comprehensive understanding of how to efficiently generate professional-looking AI-driven video content with your digital avatar.
Following are the key things you will be able to do after you watch this demo:
Select optimal video resolution for long-term content creation
Composite avatar videos into 4K projects using the quarter-screen technique
Design flexible avatar sets with multiple camera angles and poses
Implement cost-effective workflows for digital avatar production
Batch produce avatar videos efficiently
Utilize green screen techniques for high-quality avatar generation
Automate avatar performances using audio and text-to-performance tools
Future-proof video content by understanding resolution strategies
Create visually engaging educational or presentation videos with digital avatars
Optimize video production workflow for AI-generated content
Overview of Creating a Digital Double Avatar 0:08
Josh Lomelino introduces the video as an overview of creating a digital double avatar, emphasizing the importance of early workflow considerations for automated video production.
He highlights the significant decision of choosing between HD at 1080p and Ultra HD at 4K or higher, noting that while 1080p is faster and more economical, 4K offers better future-proofing.
Josh recommends producing videos in 4K for longevity, ensuring the platform supports 4K playback, and mentions that Anomaly Amp supports this out of the box.
For cost-effective 4K output, he suggests exporting the avatar at 1080p and compositing it over a 4K background in video editing software like Premiere or Camtasia.
Techniques for Achieving 4K Output 2:12
Josh explains that exporting avatars in 4K can be costly, but exporting at 1080p and compositing it in a 4K project maintains full resolution without quality loss.
He describes the quarter screen principle, where the avatar is positioned in the bottom right-hand corner of the screen, enhancing the learning experience with foreground and background visuals.
Josh advises producing the original avatar in 4K and storing it at full resolution in both cloud storage and local backups, but notes that most people will render videos in 1080p.
He outlines the process of creating an avatar set with multiple camera angles, standing and sitting poses, and options with and without hand gestures, providing a flexible collection for different needs.
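The quarter-screen principle works because the numbers line up exactly: a 1080p clip is one quarter of a 4K frame, so it composites at 100% scale with no upscaling. A small sketch (helper and constant names are illustrative, not from the video):

```python
# The quarter-screen principle in numbers. A 1080p avatar clip is exactly one
# quarter of a UHD 4K frame, so it composites at 100% scale with no upscaling.

FRAME_4K  = (3840, 2160)
AVATAR_HD = (1920, 1080)

def bottom_right_position(frame, clip):
    """Top-left (x, y) that pins a clip to the frame's bottom-right corner."""
    frame_w, frame_h = frame
    clip_w, clip_h = clip
    return frame_w - clip_w, frame_h - clip_h

x, y = bottom_right_position(FRAME_4K, AVATAR_HD)  # (1920, 1080)

# The avatar covers exactly one quarter of the 4K canvas.
coverage = (AVATAR_HD[0] * AVATAR_HD[1]) / (FRAME_4K[0] * FRAME_4K[1])  # 0.25
```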
Green Screen Workflows and Automation 3:33
Josh discusses green screen workflows, offering tips for achieving strong results even without a high-end green screen.
He explains how to batch produce avatars efficiently, saving time with a streamlined workflow.
Josh introduces the concept of fully automating avatar performances using audio files or AI-generated audio and video with text-to-performance tools.
He concludes the demo by mentioning that he will cover these topics in more detail in future videos, encouraging viewers to stay tuned for further instruction.
Keywords: Green screen, virtual avatar, training video, RGB, Ultra Key
In this tutorial, Josh demonstrates how to create a versatile virtual avatar using a green screen background. By following his step-by-step process, viewers will learn to record a training video, use video editing software to remove the background, and export a high-quality 4K file for avatar creation. The technique allows users to generate a digital double that can be placed on any background, enabling them to create numerous training videos, presentations, and lectures without being physically present. Ultimately, viewers will gain the skills to produce an AI avatar that can work continuously, freeing up their personal time while maintaining professional content production.
Following are the key things you will be able to do after you watch this demo:
Shoot a training video using a green screen background
Apply the Ultra Key filter in video editing software
Create a 100% green color matte
Remove background elements from video footage
Export high-quality 4K video files
Generate a virtual avatar using AI software
Render digital doubles for multiple presentations
Layer virtual avatars over different backgrounds
Integrate avatar presentations with PowerPoint and Canva slides
Produce training content without physical studio time
Creating a Virtual Avatar with a Green Screen Background 0:08
Josh Lomelino explains the importance of using a green screen background for creating virtual avatars, emphasizing versatility and ease of use.
He describes the general principle of achieving a 100% green background in the RGB model, noting the difficulty of achieving perfect green.
Josh introduces simple steps to help with the process, including shooting a two-minute training video on a green screen and using 100% green shapes in video editing software.
He demonstrates the use of the Ultra Key filter in video editing software to eliminate the background and adjust settings like feathering, key color, and matte cleanup.
Setting Up the Green Screen Workflow 5:18
Josh explains the creation of a 100% green color matte in video editing software, specifying the width and height to be 4K.
He describes layering the green clip underneath the video track and extending it to the same length as the training clip.
Josh mentions the importance of placing additional green color mattes to fix any spillover areas and avoid relying solely on the Ultra Key effect.
He outlines the process of setting in and out points, exporting the clip as an MP4 file, and using Adobe Media Encoder for batch rendering.
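Ultra Key's core idea can be sketched in a few lines: every pixel is kept or dropped based on its distance from the key color, which for "100% green" in 8-bit RGB is (0, 255, 0). This toy version omits the feathering and matte cleanup the demo adjusts:

```python
# Toy chroma keyer. "100% green" in 8-bit RGB is (0, 255, 0); a pixel becomes
# transparent when it sits close enough to that key color. Real keyers like
# Ultra Key add feathering and matte cleanup, which this sketch omits.

KEY_GREEN = (0, 255, 0)

def is_keyed_out(pixel, key=KEY_GREEN, tolerance=60):
    """True if the pixel is near enough to the key color to be removed."""
    distance = sum((p - k) ** 2 for p, k in zip(pixel, key)) ** 0.5
    return distance <= tolerance

# Pure and near-pure greens key out; the subject's skin tones survive.
assert is_keyed_out((0, 255, 0))
assert is_keyed_out((20, 240, 15))
assert not is_keyed_out((200, 160, 140))  # approximate skin tone
```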
Exporting and Adjusting Settings 8:12
Josh details the export settings, including using the H.264 codec for high quality and specifying the file type as MP4.
He emphasizes the importance of evenly lighting the green screen for a better key and mentions common issues like wrinkles and folds.
Josh shows how to create a new avatar in HeyGen or other virtual avatar software, validating the model by reading a code aloud.
He explains the process of uploading source material, validating the camera angle, and retaining 4K footage for higher-resolution renders.
Using the Virtual Avatar in Various Productions 11:27
Josh discusses the flexibility of using the virtual avatar in presentations, lectures, and demos, including mixing with PowerPoint and Canva slides.
He highlights the ability to create unlimited digital doubles and the importance of not checking the AI remove background option.
Josh explains the use of Camtasia's Remove Color effect to key out the green color in the background and the importance of using high-quality settings.
He advises against using proxy footage for making decisions about green screen settings and emphasizes the need for maximum quality settings in video editing software.
Final Steps and Infinite Possibilities 14:54
Josh concludes by mentioning the infinite possibilities of the workflow, including creating presentations directly inside HeyGen.
He discusses integrating with Canva for timed slide changes and animations, and the option to check the background removal button for a transparent background.
Josh reiterates the importance of using the method shown in the video to achieve 4K production quality, even if it requires a more expensive plan.
He wraps up the demo, encouraging viewers to explore the various applications and approaches for their virtual avatars.
Keywords: batch production, avatar, digital double, lighting setup, color correction, video editing project, HeyGen, encoder
In this tutorial, Josh Lomelino demonstrates a comprehensive workflow for efficiently batch producing multiple virtual avatars with consistent lighting and color quality. Viewers will learn how to set up precise video editing project settings, create a master sequence with multiple camera angles, and use Adobe Media Encoder to render individual clips for avatar training. The technique allows content creators to scale their avatar production, quickly export multiple versions of their digital doubles, and maintain a well-organized project structure that enables future edits and refinements. By following this method, users can streamline their avatar creation process, saving significant time and producing high-quality, professional virtual representations.
Following are the key things you will be able to do after you watch this demo:
Configure video editing project settings to match camera specifications
Create a systematic numbering and organization system for avatar sequences
Set up multiple camera angles within a single project
Use Adobe Media Encoder to batch render avatar clips
Export individual video files for virtual avatar training
Implement color correction and LUT modifications across multiple clips
Organize project files for efficient content production
Develop a scalable workflow for mass avatar creation
Troubleshoot and remove performance anomalies in avatar recordings
Back up and preserve digital asset production files
Setting Up Lighting and Color Values 0:08
Josh Lomelino explains the importance of setting up lighting and color values once to achieve consistent results over time.
He emphasizes the need to test lighting and color values before batch producing a group of avatars.
Josh mentions the flexibility to make further adjustments later using LUT (lookup table) color modifications or color correction tools.
The workflow allows for the efficient production of 10 to 50 avatars, ensuring visual polish from the start.
Consistency in Project Settings 1:42
Josh highlights the necessity of matching video editing project settings to the specifications of the recording camera.
He provides an example of setting up a project for a Logitech 4k camera and ensuring consistency in frame size and frame rate.
Josh advises checking file properties to extract frame size and frame rate if unsure.
Consistency in project settings is crucial for mass producing different clips.
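The consistency check Josh describes amounts to comparing the camera's specifications with the project settings field by field. A minimal sketch, with illustrative field names not tied to any particular editor's API:

```python
# Sketch of the consistency check: the editing project must match the source
# camera's frame size and frame rate before batch production. Field names are
# illustrative, not from any particular editor's API.

def settings_mismatches(camera, project):
    """Names of the settings where the project differs from the camera."""
    return [key for key in ("width", "height", "fps") if camera[key] != project[key]]

logitech_4k = {"width": 3840, "height": 2160, "fps": 30}  # assumed camera specs
project     = {"width": 3840, "height": 2160, "fps": 30}

assert settings_mismatches(logitech_4k, project) == []  # safe to batch produce
```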
Creating a Master Sequence 2:59
Josh sets up a master sequence to serve as a template for duplicating sequences as needed.
He uses a clear numbering system for sequences, labeling each avatar with a specific outfit and camera angle.
Examples include Avatar 001 (direct address, no hands) and Avatar 0013 (quarter view).
Josh organizes sequences in a dedicated folder called a bin for project organization.
Batch Rendering with Adobe Media Encoder 4:56
Josh explains the process of adding clips to a Batch Render Queue using Adobe Media Encoder.
He selects in and out points for each camera angle, creating dedicated files for each angle.
Josh configures the encoder to render only the specified in and out range on the timeline.
Each camera angle should be exported as an individual MP4 file, specifying the folder location and file name.
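The per-angle export step can be pictured as generating one output file per avatar/angle pair. The naming convention below is assumed for illustration, echoing the demo's Avatar 001-style numbering:

```python
# Sketch of the per-angle export step: one MP4 per avatar/camera-angle pair.
# The naming convention is assumed for illustration.

def batch_queue(avatar_id, angles, out_dir="renders"):
    """Output file path for each camera angle added to the render queue."""
    return [f"{out_dir}/avatar_{avatar_id:03d}_{angle}.mp4" for angle in angles]

queue = batch_queue(1, ["direct_no_hands", "direct_hands", "quarter_view"])
# queue[0] is "renders/avatar_001_direct_no_hands.mp4"
```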
Finalizing and Organizing Project Files 6:40
Josh emphasizes the importance of organizing project files, including original source files, rendered clips, and project files.
He advises saving the video editing project frequently as a fail-safe for future edits.
Josh highlights the need to review source footage for any performance anomalies and correct them.
The workflow allows for the removal of outdated avatars and recreation without problematic movements.
Backing Up and Scaling Content Production 8:25
Josh frequently backs up his entire project folder by compressing it into a zip file for disaster recovery.
He mentions the time investment upfront to create polished assets and resolve hiccups.
Josh advises starting with manual methods and gradually scaling to more advanced techniques.
The well-organized project structure saves time, enables content production scaling, and supports high-performance results.
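The zip backup Josh describes can be done with nothing but the Python standard library; the paths here are placeholders:

```python
# The backup step with only the standard library: compress the entire project
# folder (source files, rendered clips, project files) into one zip archive.
# Paths are placeholders.
import shutil
from pathlib import Path

def backup_project(project_dir, archive_base):
    """Create archive_base.zip containing everything under project_dir."""
    return shutil.make_archive(str(archive_base), "zip", root_dir=str(project_dir))
```

`shutil.make_archive` returns the path of the archive it wrote, so each backup can be logged or verified before moving on.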
Keywords: automated performance, audio file, high-quality microphone, digital avatar, recording, Camtasia
Automate Performances from Audio
Learn how to create a professional automated performance using digital avatars by recording high-quality audio and seamlessly integrating it with a virtual presenter. This technique allows you to transform audio recordings into engaging video content, whether from live presentations, scripts, or screen recordings. You'll discover how to export audio files, align a digital avatar's movements, and use chroma key technology to place your virtual presenter on any background. By mastering this workflow, you can produce polished, context-rich video demos.
Following are the key things you will be able to do after you watch this demo:
Record high-quality audio using professional recording software
Export audio files in multiple formats (WAV and MP3)
Upload audio recordings to a digital avatar platform
Align digital avatar movements precisely with audio tracks
Render video performances from audio recordings
Remove background using chroma key techniques
Integrate digital avatars into various visual backdrops
Repurpose existing audio from presentations or demos
Create automated video content without on-camera performance
Optimize audio files for different digital platforms
Creating an Automated Performance Using Audio 0:08
Josh Lomelino explains two options for creating an automated performance: using a text-to-speech generated audio file or recording the performance using a high-quality microphone.
He emphasizes that recording with a high-quality microphone yields the best results and will demonstrate this method in the demo.
Josh mentions that the next demo will cover creating a fully automated performance using text, automating the entire process from audio capture to video production.
He notes that while the automated process is efficient, it may not match the quality of a live performance.
Preparing and Exporting Audio Recordings 1:09
Josh discusses the importance of using a high-quality audio file for the best results and mentions uploading the audio recording to a digital avatar.
He explains the need to export an uncompressed WAV file and an MP3 file optimized for web use, highlighting the importance of having both options ready.
Josh typically records his audio directly into Camtasia, which he finds to be the fastest way to capture high-quality audio for quick editing.
He demonstrates how to export a local file and choose between saving it as a WAV or MP3 file, noting that other audio editing tools can also be used.
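Outside Camtasia, producing the two deliverables (an uncompressed WAV plus a web-optimized MP3) could look like the following ffmpeg command lines. The commands are built but not executed, and the MP3 bitrate is an assumption:

```python
# Sketch: the two audio deliverables as ffmpeg command lines, built but not
# executed. The uncompressed WAV uses 16-bit PCM; the 192k MP3 bitrate is an
# assumption for "optimized for web use".

def audio_export_commands(src, stem):
    """Return (wav_cmd, mp3_cmd) for exporting both formats from one source."""
    wav = ["ffmpeg", "-i", src, "-c:a", "pcm_s16le", f"{stem}.wav"]
    mp3 = ["ffmpeg", "-i", src, "-c:a", "libmp3lame", "-b:a", "192k", f"{stem}.mp3"]
    return wav, mp3

wav_cmd, mp3_cmd = audio_export_commands("narration_take.m4a", "narration")
```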
Generating Video Performance with Digital Avatar 2:29
Josh explains the process of generating a video performance by dragging and dropping the audio file into the project and adjusting the start and end times of the digital avatar.
He mentions exporting the production to render the performance into an MP4 file and downloading it into the project.
Josh highlights the use of the chroma key or Ultra Key function to remove the background and seamlessly integrate the digital avatar into any backdrop.
He provides examples of using this technique for reading from a script, repurposing audio from live presentations, and creating matching visuals with on-camera performances.
Combining Performance Modalities and Future Demos 3:54
Josh discusses the challenges of managing all three performance modalities (screen recording, audio, and digital avatar) simultaneously and the importance of practicing beforehand.
He explains how to export the audio from a demo, generate a digital avatar, and overlay it onto the video, showing the versatility of combining these elements.
Josh mentions upcoming demos that will cover generating audio using generative AI from text alone, creating a fully automated workflow.
He will also demonstrate automating the creation of slides and the precise timing of each slide's animation, allowing for a completely hands-free production system.
Keywords: automated performance, text, video, Otter AI, voice clone, ElevenLabs, HeyGen, audio, multilingual
In this video, Josh demonstrates how to create fully automated video performances directly from text using tools like Otter AI, ElevenLabs, and HeyGen. Viewers will learn how to generate high-quality voice clones, prototype video scripts, and produce professional-looking content with minimal effort by leveraging AI-powered voice and video generation technologies. The workflow allows content creators to transform written or spoken text into polished video presentations quickly and efficiently. By following Josh's method, users can generate multiple video iterations, edit audio precisely, and create digital avatars that replicate their voice and performance with remarkable accuracy.
Following are the key things you will be able to do after you watch this demo:
Generate video scripts from transcribed audio using AI tools
Create high-quality voice clones with consistent audio recordings
Prototype video content using free and paid AI platforms
Optimize voice training for digital avatars
Manage content production across multiple AI environments
Edit audio tracks with minimal credit consumption
Develop a systematic workflow for automated video creation
Replicate personal performance using digital voice technology
Transform text-based content into professional video presentations
Implement cost-effective strategies for video and audio generation
Creating a Fully Automated Performance from Text 0:08
Josh Lomelino explains the process of creating a fully automated performance directly from text, including generating audio prompts using Otter AI.
He describes how he brainstorms ideas while walking, then exports the subtitle transcript (SRT) file and processes it with AI tools like Claude or ChatGPT.
Josh mentions breaking up long scripts into manageable blocks of 1800 characters and generating a year's worth of content for various platforms.
He emphasizes the use of text, whether written manually or spoken and transcribed, to craft a video script using two primary methods.
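The chunking step (breaking a long script into blocks of at most 1800 characters) can be sketched as a sentence-boundary splitter. The 1800-character limit comes from the demo; the splitting strategy itself is an assumption:

```python
# Sketch of the script-chunking step: split a long script into blocks of at
# most 1800 characters, breaking on sentence boundaries. The 1800-character
# limit comes from the demo; the splitting strategy is an assumption.
import re

def chunk_script(text, limit=1800):
    """Greedily pack whole sentences into blocks no longer than `limit`."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    blocks, current = [], ""
    for sentence in sentences:
        candidate = f"{current} {sentence}".strip()
        if len(candidate) <= limit:
            current = candidate
        else:
            if current:
                blocks.append(current)
            current = sentence[:limit]  # truncate a single oversized sentence
    if current:
        blocks.append(current)
    return blocks

blocks = chunk_script("First idea. Second idea. " * 200)
```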
Generating High-Quality Voice Clones 1:51
Josh discusses creating a high-quality voice clone using ElevenLabs, initially finding the results artificial but later perfecting the settings.
He highlights the importance of using a consistent audio clip for training the voice digital double, ideally around three hours of spoken audio.
Josh explains the challenges of recording consistently for three hours and how he stitches together previous demo recordings to create a large audio clip.
He stresses the need for meticulous tracking of audio settings to ensure uniformity and avoid sudden changes in volume or tonal quality.
Optimizing Audio Recording for Consistency 3:36
Josh shares his experience of recording multiple live sessions with an audience, which infused the audio with personality and energy.
He explains the importance of having consistently dialed-in audio for generating a high-quality performance, as the AI listens to everything in the audio track.
Josh mentions the time and cost involved in using ElevenLabs, which can take six to eight hours to analyze a voice and build a model.
He advises against using cheaper models, such as Multilingual v1 or Turbo v2.5, and recommends upgrading to the Multilingual v2 model for better results.
Using HeyGen for Cost-Effective Prototyping 5:35
Josh introduces HeyGen as an alternative for creating generative content when ElevenLabs burns through credits too quickly.
He explains how he trains HeyGen on his voice by uploading a 10 to 15-minute audio clip and generates unlimited videos at no extra cost, depending on the subscription plan.
Josh describes the process of creating prototypes, making real-time adjustments to the script, and rendering multiple takes.
He mentions using his phone in split-screen mode while walking to make adjustments on the fly and then copying and pasting the revised script into HeyGen.
Switching Between HeyGen and ElevenLabs 7:44
Josh explains how he can switch the voice in HeyGen to the high-quality production voice in ElevenLabs with a click of a button.
He highlights the downside of using HeyGen, which is the risk of losing all credits if there are issues with the audio track in the final video.
Josh prefers using the Studio tool in ElevenLabs for targeted editing, which allows regenerating just portions of the audio without redoing the entire clip.
He mentions the benefit of being able to download the WAV and MP3 files from the Studio tool in ElevenLabs as a fail-safe.
Organizing Video Production Phases 9:21
Josh describes his workflow of treating production as two phases: the cheap, free voice phase and the final phase.
He explains the process of pasting the text directly into the Hey Gen editor, listening to the prototype, and resolving issues before creating a new file in Hey Gen.
Josh organizes his videos into two folders, a prototype folder and a final folder, to keep the two phases of his method separate.
He mentions using the multilingual version two model for cost-effective throwaway tests and training his voice with Hey Gen for free prototyping.
Leveraging Digital Doubles for High-Quality Videos 10:34
Josh shares how he uses his digital doubles to replicate a performance of his voice and generate a corresponding video composite.
He explains how he creates a script using Otter AI during a walk, copies and pastes it into his automated workflow, and produces a high-end video with minimal effort.
Josh highlights the benefits of this workflow, which allows him to deliver excellence without skipping a beat, even when small inconsistencies would have derailed the process before.
He concludes by mentioning the next steps in the following videos, which will cover adding automated visual elements on screen behind the virtual avatar.
Keywords: AI, Claude, ChatGPT, brainstorming, video, script, otter, SRT, transcription, generative audio, bulk export, workflow
Generate Ideas with Otter and Claude
Josh demonstrates how to use AI tools like Otter AI, ChatGPT, and Hey Gen to quickly transform brainstorming transcripts into polished video scripts. By leveraging AI's capabilities, creators can capture their ideas, generate scripts, and create content with minimal manual editing. The workflow allows users to convert spoken thoughts into text, refine the script through AI assistance, and produce a final video with a digital avatar or voice clone. Viewers will learn a streamlined process for content creation that dramatically reduces production time and enables rapid, creative video generation.
Following are the key things you will be able to do after you watch this demo:
Capture brainstorming ideas using Otter AI transcription
Export SRT files from recorded thoughts
Convert raw transcripts into structured video scripts
Leverage AI tools to refine and edit content automatically
Break down long scripts into manageable character blocks
Identify and correct potential AI pronunciation challenges
Generate video scripts with minimal manual editing
Prepare scripts for digital avatar or voice clone production
Batch process multiple transcripts simultaneously
Create content at scale using AI-assisted workflows
Using AI Tools for Content Creation 0:09
Josh Lomelino explains how AI tools help him capture ideas and generate content directly from brainstorming sessions.
He uses Otter AI to record his thoughts verbatim, which he then exports as an SRT file for transcription.
The SRT file contains every word spoken along with time codes, making it easy to generate a full video script.
Josh leverages AI tools like 11 Labs and Hey Gen to produce audio and video content from the transcribed text.
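The SRT export described above is a plain-text format: numbered caption blocks, each with a time code line and the spoken words. Stripping the numbering and time codes to recover a continuous script can be done mechanically. A minimal sketch (the SRT format itself is standard; the sample wording is made up for illustration):

```python
def srt_to_script(srt_text: str) -> str:
    """Strip sequence numbers and time code lines from an SRT export,
    keeping only the spoken text as one continuous script."""
    kept = []
    for line in srt_text.splitlines():
        line = line.strip()
        if not line:
            continue
        if line.isdigit():        # caption sequence number, e.g. "1"
            continue
        if "-->" in line:         # time code line, e.g. 00:00:01,000 --> 00:00:04,000
            continue
        kept.append(line)
    return " ".join(kept)

# Hypothetical two-caption SRT export for demonstration
sample = """1
00:00:01,000 --> 00:00:04,000
Welcome to the demo.

2
00:00:04,000 --> 00:00:07,500
Today we capture ideas with Otter."""

print(srt_to_script(sample))
# -> Welcome to the demo. Today we capture ideas with Otter.
```

In practice you would paste the raw SRT into the AI chat instead, as Josh does, but this shows what the tool is actually doing with the file.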
Generating Video Scripts from Transcripts 2:00
Josh describes the process of generating a video script from the transcribed text using AI tools.
He explains the difference between having a clear plan and a vague notion for the script.
The AI can capture random ideas and generate multiple scripts within the Otter AI application.
Josh then uses tools like Claude AI or ChatGPT to expand and refine the generated scripts.
Collaborative Writing with AI 2:35
Josh aims to create a video script that his digital double can read aloud, reducing the need for extensive editing.
He explains the collaborative writing process between himself and AI tools to generate drafts and revisions.
The ultimate goal is to use AI to create a polished video script without spending hours on manual editing.
Josh emphasizes the importance of spending time to perfect the AI prompting process.
Workflow for Converting SRT Files 3:51
Josh demonstrates the workflow for converting an SRT file into a video script using Otter AI and Notepad.
He highlights the importance of checking the prompts document for time-saving methods.
Josh explains two methods for creating video scripts: word-for-word transcription and general direction.
He provides detailed prompts for ChatGPT to convert SRT files into 1800-character blocks.
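The 1800-character blocks mentioned above can also be produced mechanically before handing the text to an AI tool. A minimal sketch, assuming the 1800 limit from the demo and breaking on word boundaries so no word is cut in half (a single word longer than the limit passes through unsplit):

```python
def chunk_script(text: str, limit: int = 1800) -> list[str]:
    """Split a script into blocks of at most `limit` characters,
    breaking only on word boundaries."""
    blocks, current = [], ""
    for word in text.split():
        candidate = f"{current} {word}".strip()
        if len(candidate) > limit and current:
            blocks.append(current)   # flush the full block
            current = word
        else:
            current = candidate
    if current:
        blocks.append(current)
    return blocks

blocks = chunk_script("lorem " * 1000)   # ~6000 characters of filler
print(len(blocks), max(len(b) for b in blocks))
# -> 4 1799
```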
Handling Rough Brainstorming Transcripts 7:40
Josh discusses handling rough brainstorming transcripts that require more assistance from AI tools.
He explains the need to be mindful of checking each word when using AI to generalize the transcript.
Josh provides a prompt for ChatGPT to convert the SRT file into a video script and fix grammatical issues.
He emphasizes the importance of ensuring the script is readable by the AI digital double.
Challenges with AI-Generated Scripts 10:06
Josh mentions potential challenges with AI-generated scripts, such as mispronunciation by the digital double.
He explains the time-consuming process of manually correcting AI-generated scripts.
Josh introduces a prompt for a cleanup pass to automatically correct readability issues.
He advises copying and pasting the corrected script into the video script document for backup.
Finalizing the Video Script 12:23
Josh explains the final steps of rendering the script as a prototype using a free voice clone.
He advises listening to the playback and adjusting the script for pronunciation issues.
Once satisfied with the prototype, the final audio can be generated using tools like 11 Labs.
The final audio clip can then be uploaded to a virtual avatar software for the final on-screen performance.
Batch Processing Multiple SRT Files 13:21
Josh highlights the option to bulk export multiple SRT files from the Otter AI app for time savings.
He explains how this process can be applied to a whole folder of SRT files.
This method allows for the creation of massive amounts of content quickly and easily.
Josh concludes the demo by encouraging viewers to try the process for themselves.
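The bulk workflow above, converting a whole folder of SRT exports at once, can be sketched as a small loop. This is an assumption about one way to script it, not the exact method shown in the demo; the sample files and names are hypothetical:

```python
import tempfile
from pathlib import Path

def batch_convert(folder: Path) -> list[Path]:
    """Convert every .srt file in a folder to a plain-text script,
    writing <name>.txt next to each source file."""
    outputs = []
    for srt in sorted(folder.glob("*.srt")):
        lines = [l.strip() for l in srt.read_text().splitlines()]
        # keep only spoken text: drop blanks, sequence numbers, time codes
        script = " ".join(
            l for l in lines
            if l and not l.isdigit() and "-->" not in l
        )
        out = srt.with_suffix(".txt")
        out.write_text(script)
        outputs.append(out)
    return outputs

# Usage: build a throwaway folder with two tiny SRT files, then convert both.
tmp = Path(tempfile.mkdtemp())
(tmp / "a.srt").write_text("1\n00:00:01,000 --> 00:00:02,000\nHello there.\n")
(tmp / "b.srt").write_text("1\n00:00:01,000 --> 00:00:02,000\nSecond clip.\n")
print([p.name for p in batch_convert(tmp)])
# -> ['a.txt', 'b.txt']
```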
Keywords: Automation, AI-generated, content, slides, video background, SRT, transcript
Automate Slide Data Creation
In this demo, Josh Lomelino reveals a powerful workflow for automating on-screen elements and slide creation using AI tools. Viewers will learn how to transform a transcript into a fully automated slide deck by leveraging AI platforms like Claude and ChatGPT to generate inspirational content with precise timing. The technique allows content creators to automatically generate slide content, export it to a CSV file, and prepare for seamless PowerPoint or Canvas slide production. By following this method, users can save significant time in presentation creation and eliminate manual slide transitions.
Following are the key things you will be able to do after you watch this demo:
Generate automated slide content using AI transcription tools
Extract precise time codes from transcripts for slide transitions
Transform raw transcripts into structured slide presentations
Use AI prompts to create inspirational and motivational slide copy
Convert slide data into JSON and CSV formats
Automate slide creation across multiple platforms (PowerPoint, Canvas)
Optimize slide timing and pacing for engaging presentations
Leverage AI tools to reduce manual presentation development time
Export transcription data for seamless content repurposing
Create consistent and professional slide decks without manual intervention
Automating On-Screen Elements with AI 0:09
Josh Lomelino introduces the demo, focusing on automating on-screen elements for lectures or demos.
He explains the use of AI-generated voice, digital double avatar, and automated slide content.
Josh emphasizes the importance of the vocal track in automating the entire performance.
He mentions using either an SRT file or transcription tools like Otter AI or Loom for accurate time codes.
Using Loom for Precise Time Codes 1:24
Josh advises using Loom for more accurate time codes compared to Otter AI.
He explains the challenges of automating slide transitions and the importance of precise time codes.
Josh demonstrates how to export the SRT file and use it for automating slide transitions.
He highlights the need for accurate time codes to avoid manual recording and timing issues.
Generating Slide Content with AI 4:38
Josh shows how to use Claude AI to generate slide content based on the SRT file.
He explains the process of copying the SRT file into memory and using AI prompts to generate slide content.
Josh suggests making the slide content inspirational and motivational.
He emphasizes the importance of comparing and mixing AI-generated content to get the desired outcome.
Adjusting Slide Transition Timing 6:10
Josh discusses the importance of slide transition timing and how it affects the video's pacing.
He suggests using a fixed number of slides and adjusting the transition timing based on the video's feel.
Josh explains how to increase or decrease the number of slides while maintaining the conversational tone.
He highlights the need for accurate time codes to ensure smooth slide transitions.
Handling Time Code Issues 8:13
Josh addresses potential issues with time codes and suggests using Loom for more accurate data.
He explains how to adjust the number of slides based on the video's length and transition timing.
Josh provides prompts for asking AI tools to generate the correct number of slides and time codes.
He emphasizes the importance of accurate time codes for automating slide transitions.
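When the AI is asked for a fixed number of slides over a known video length, as described above, the simplest timing strategy is to space the transitions evenly. This sketch is an illustration of that strategy, not the exact prompt output from the demo:

```python
def slide_times(duration_s: float, num_slides: int) -> list[str]:
    """Evenly space slide transitions across a video, returning
    mm:ss time codes for when each slide should appear."""
    step = duration_s / num_slides
    times = []
    for i in range(num_slides):
        t = int(i * step)                      # seconds into the video
        times.append(f"{t // 60:02d}:{t % 60:02d}")
    return times

# A 10-minute video with 10 slides: one transition every 60 seconds
print(slide_times(600, 10))
```

In the demo, transitions follow the actual SRT time codes instead, so the slides track what is being said rather than a fixed clock.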
Exporting Slide Data to Excel 12:53
Josh shows how to export the slide data to an Excel file from AI-generated JSON data.
He explains the process of copying and pasting JSON data into an Excel file.
Josh suggests using a fail-safe strategy if the direct export method doesn't work.
He highlights the importance of having a clean data source for generating slides automatically.
Transforming JSON Data to CSV 13:59
Josh demonstrates how to transform JSON data into a CSV file using ChatGPT.
He explains the process of copying JSON data into ChatGPT and generating a CSV file.
Josh provides prompts for handling issues with special characters and ensuring clean data.
He emphasizes the importance of having a CSV file for automating PowerPoint or Canvas slides.
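The special-character problem mentioned above, commas and quotes inside slide copy breaking a naive CSV, is exactly what a proper CSV writer solves by quoting fields and doubling embedded quotes. A minimal sketch with hypothetical slide data in the JSON shape the AI chat might return:

```python
import csv
import io
import json

# Hypothetical AI-generated slide data; the body text deliberately
# contains commas and quotes to exercise the CSV quoting rules.
slides_json = """[
  {"time": "00:00", "title": "Start strong", "body": "Your ideas, \\"captured\\" fast"},
  {"time": "01:00", "title": "Keep momentum", "body": "Commas, quotes, no problem"}
]"""

rows = json.loads(slides_json)
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["time", "title", "body"])
writer.writeheader()
writer.writerows(rows)      # csv module quotes commas and doubles embedded quotes
csv_text = buf.getvalue()
print(csv_text)
```

Asking ChatGPT to do this conversion, as in the demo, amounts to the same transformation; doing it locally gives you a deterministic fallback when the AI's direct export misbehaves.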
Final Steps for Automating Slides 18:03
Josh explains how to use the CSV file to generate PowerPoint or Canvas slides automatically.
He highlights the power of having all the necessary data for automating the presentation.
Josh mentions that the next demo will cover generating PowerPoint and Canvas slides in detail.
He concludes the demo by summarizing the key steps and the benefits of automating the presentation process.
Discover how to take your app idea from concept to high-fidelity MVP with lightning speed in this hands-on demo! You’ll learn how to organize product requirements, train AI tools using your own user stories, and craft powerful prompts that supercharge no-code and low-code platforms like Lovable and Thunkable. Watch step-by-step as we merge user insights, automate prototype creation, and iterate rapidly to build a functional, customizable app without writing code. Whether you're a founder, designer, or developer, this demo will empower you to launch better products, faster.
After watching this video, viewers will be able to efficiently structure and document their product ideas, train AI tools with custom user stories and requirements, and generate detailed prompts for building full-featured app prototypes. They'll learn how to merge, organize, and optimize user stories to maximize productivity and reduce costs with AI-driven app builders like Lovable and Thunkable. By following these steps, viewers can rapidly create, customize, and iterate on high-fidelity MVPs, preparing their apps for further refinement and deployment. This workflow empowers users to leverage multiple no-code platforms and streamline their app development from concept to actionable prototype.
Following are the key things you will be able to do after you watch this demo:
Understanding Pricing and Pre-Composing Chats 0:11
Josh Lomelino explains the importance of understanding pricing in AI apps, emphasizing that credits are tied to prompts and chats.
He advises pre-composing chats in tools like ChatGPT to avoid high costs in apps like Lovable, which charge based on daily credits.
Josh demonstrates how to go back to prior steps in ChatGPT to train the system on user stories and features.
He highlights the need to confirm whether the training applies universally across all chats; otherwise, the system must be asked to do so explicitly.
Training and Managing Chats 4:53
Josh discusses the process of training chats on system functionality, using SRT files as an example.
He explains the incremental compounding of work in Lovable, which makes it costly to start chatting without a well-defined prompt.
Josh emphasizes the importance of optimizing the use of credits to avoid high costs, comparing it to the cost of a development team.
He mentions the potential for the browser to choke on large chats and the need to break them into manageable parts.
Merging and Organizing User Stories 7:17
Josh demonstrates how to merge multiple chats to create a faster and more efficient chat.
He explains the process of outputting user stories as a CSV and the challenges with special characters in CSV files.
Josh suggests exporting as an Excel file to fix formatting issues.
He highlights the importance of incrementally building a pipeline to automate the creation of front-end interface screens.
Enhancing User Stories with Features and Acceptance Criteria 9:36
Josh adds a feature column to the user story backlog, differentiating it from user story language.
He includes acceptance criteria, which helps in testing and identifying the area within the app where the feature would exist.
Josh emphasizes the importance of documenting key wins and moments in a Google Doc for future reference.
He explains the process of comparing the current chat output with a saved Word file to ensure completeness.
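The enhanced backlog row described above has a clear shape, the user story plus the feature and acceptance criteria columns Josh adds. A minimal sketch of that structure; the field names mirror the demo's columns, while the example story itself is hypothetical:

```python
from dataclasses import dataclass

@dataclass
class BacklogItem:
    """One row of the user story backlog, with the feature and
    acceptance criteria columns described in the demo."""
    user_story: str           # "As a <persona>, I want <goal> so that <benefit>"
    feature: str              # product-feature name, distinct from story language
    acceptance_criteria: str  # testable conditions; also hints where the feature lives

item = BacklogItem(
    user_story="As a coach, I want to review client habit streaks so I can adjust plans.",
    feature="Habit streak dashboard",
    acceptance_criteria="Dashboard lists streaks per habit; tapping one opens its history.",
)
print(item.feature)
# -> Habit streak dashboard
```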
Creating a Master Prompt for Lovable 17:44
Josh discusses the process of creating a master prompt for Lovable, which includes context, logical structure, explicit instructions, and adaptive considerations.
He highlights the need for granular detail to get specific UI controls in the prompt.
Josh explains the importance of saving the output as a Google Doc or GitHub repository for version control.
He demonstrates how to rewrite the master prompt to include all features in one MVP release.
Training Lovable on Documentation 42:48
Josh trains Lovable on the documentation of the tool, which helps in creating a prompt for Lovable.
He explains the process of crawling through the documentation pages and listing the pages learned from.
Josh emphasizes the importance of checking that the AI is actually doing what it claims to do.
He demonstrates how to extract and summarize recommendations from the AI.
Refining and Customizing the App 45:00
Josh refines and customizes the app by adjusting colors and mastering prompting.
He explains the process of using chat mode to plan additional features like a coach and admin portal.
Josh demonstrates how to toggle between different device types to test the app on various form factors.
He highlights the importance of iterating on the app to ensure it meets user needs and pain points.
Exploring Different Tools and Integrations 49:51
Josh explores different tools like Thunkable, Bubble IO, Cursor, Replit, Flutter Flow, and Draftbit.
He explains the process of training the AI on the documentation of these tools to create a single prompt.
Josh highlights the importance of integrating tools like Supabase and Airtable for data management.
He emphasizes the need to experiment with different tools to find the best fit for the project.
Finalizing the MVP and Next Steps 1:04:33
Josh finalizes the MVP by ensuring all features are included in the prompt.
He explains the process of exporting the code base and pushing it to GitHub for further development.
Josh highlights the importance of iterating on the app to ensure it meets user needs and pain points.
He explains the next steps of refining and customizing the app, and preparing it for deployment to the app stores.
Discover how to unlock your product’s potential with this hands-on demo! Learn to identify your audience’s biggest challenges, craft compelling scripts using leading marketing frameworks, and leverage AI-powered tools to create engaging vision videos. Walk away ready to prototype voiceovers, iterate on creative ideas, and connect with your audience through actionable storytelling that drives real results.
This video guides viewers through recognizing and addressing key challenges like lack of clarity, inconsistency, and information overload. By following the step-by-step vision presented, viewers will learn how the app helps them transform these obstacles into opportunities for personal growth and productivity. After watching, audiences will be equipped to download the app, leverage its key features to build better habits, and take actionable steps toward positive change. The video empowers viewers to begin their own transformation journey right away.
Following are the key things you will be able to do after you watch this demo:
Creating a Vision Video Using Marketing Frameworks 0:10
Josh Lomelino explains the initial steps for creating a vision video, emphasizing the importance of the Ray Edwards framework.
The process involves identifying and amplifying pain points, telling a story, and transforming the narrative to lead to a call to action.
Josh introduces the Jeff Walker framework, which follows a similar pain-agitate-solve structure.
He discusses the use of ChatGPT to unearth pain points and personas, integrating this information into the script writing process.
Script Writing and User Problems 5:13
Josh details the process of writing a script using the Ray Edwards framework, focusing on the top three common problems.
He lists the top three problems: lack of clarity, inconsistency, and lack of accountability.
The script aims to show a transformation from pain to breakthrough, with a vision video lasting two to three minutes.
Josh emphasizes the importance of defining marketing before finishing the product to connect with the audience effectively.
Iterating the Script and Using Generative AI 10:44
Josh explains the process of creating multiple versions of the script, using ChatGPT and Claude AI for brainstorming and refining.
He highlights the importance of providing detailed instructions to the AI tools to ensure they stay within the desired framework.
Josh discusses the use of teleprompter scripts to ensure the spoken words are accurate and readable.
He mentions the use of 11 Labs for generating voiceovers, which helps in prototyping and refining the script.
Finalizing the Script and Preparing for Video Production 27:00
Josh talks about the importance of testing different versions of the script with focus groups to get valuable market feedback.
He explains the process of creating a Google Doc to keep track of different versions of the script and related content.
Josh introduces the Jeff Walker framework, which is used for product launches, and compares it with the Ray Edwards framework.
He discusses the final steps of creating the vision video, including generating animatics, storyboards, and visual content.
Generating Audio and Selecting Voices 36:23
Josh demonstrates the use of 11 Labs to generate audio performances from the script, using his own voice as a clone.
He explains the process of selecting and applying different voices from the 11 Labs library to experiment with different tones and styles.
Josh highlights the importance of exporting the audio in WAV format for higher quality and flexibility in editing.
He discusses the potential use of multiple voices to create a cast of characters in the vision video.
Editing and Refining the Vision Video 58:53
Josh outlines the next steps for editing the audio and video content, including creating animatics and storyboards.
He emphasizes the importance of aligning the visuals with the audio track to ensure the narrative flows smoothly.
Josh discusses the use of AI-generated video content for B-roll footage to show the app in use.
He concludes by summarizing the overall process of creating a vision video, from script writing to final production, and the role of various tools and frameworks in achieving this.
Unlock the power of AI to supercharge your product design process! This demo guides you through capturing raw ideas via voice recordings, organizing them into agile user stories with Otter and ChatGPT, and rapidly turning those insights into working app prototypes using Figma Make. You’ll learn to mine your own thoughts for powerful features and pain points, map these to real user needs, and supercharge your workflow with cutting-edge tools. By the end, you’ll be ready to turn any burst of inspiration into design-ready prototypes and actionable development steps.
In this video, you'll learn how to transform your brainstorming sessions and unstructured ideas into actionable agile user stories using AI tools and Otter transcription. By following the process demonstrated, you'll discover how to mine your thoughts for key features and pain points, then organize them into structured requirements. Viewers will see how to use these user stories to generate rapid app prototypes with tools like Figma Make and refine them for a real-world project. By the end, you'll have the methods and confidence to turn your random ideas into clear, design-ready prototypes and workflows.
Following are the key things you will be able to do after you watch this demo:
Here is the template you can clone to define your app.
Click here to get the ultimate prompt cheat sheet of every prompt used end to end.
Click here to get the 10 step workflow summary guide and supplemental resources.
AI-Driven Prototype Development Process 0:09
Josh Lomelino explains the process of creating AI-driven prototypes using tools like Figma, Proto.io, and others.
The goal is to create a template that can be integrated into manual prototypes, eventually leading to a full app experience using tools like Lovable or Bubble.
Emphasis on the importance of a clear product definition and agile user stories for successful AI development.
Josh demonstrates how to train a chat on app features and user stories, using his app "Reclaim You" as an example.
Training ChatGPT for User Stories 4:30
Josh shows how to train ChatGPT on audio brainstorming sessions using Otter for transcription.
He explains the process of exporting SRT files from Otter and using them as inputs for ChatGPT.
The goal is to capture random thoughts and ideas, which AI can then organize into structured user stories.
Josh demonstrates how to ask ChatGPT to learn from the audio files and generate actionable insights for app features and user stories.
Data Mining and Feature Identification 10:13
Josh discusses the importance of data mining and research to identify core pain points and features for the app.
He shows how to ask ChatGPT to create lists of pain points, issues, and challenges from the data set.
The process involves categorizing pain points into broad buckets like health and wellness, planning and process, motivation and mindset, and teaching and engagement.
Josh emphasizes the need for a clear understanding of pain points to develop effective product solutions.
Generating Agile User Stories 17:52
Josh explains how to use ChatGPT to create detailed agile user stories based on the identified pain points.
He demonstrates the process of training ChatGPT on the framework of pain to solution for creating user stories.
The goal is to generate a comprehensive list of user stories that can be used to guide the development of the app.
Josh shows how to create personas for different user groups and generate user stories for each persona.
Prototype Generation with Figma Make 25:43
Josh introduces Figma Make as a tool to generate prototype screens based on the agile user stories.
He explains the process of describing the app in Figma Make, including the app store description and features.
The tool generates HTML code for the prototype screens, which can then be manually refined.
Josh emphasizes the importance of using multiple tools and integrating their outputs to create a comprehensive prototype.
UI Framework and Stencils 35:30
Josh discusses the importance of selecting a UI framework for the final app experience.
He demonstrates how to use UI kits like Bootstrap UI and Material UI to create a consistent UI workflow.
The goal is to ensure that the prototype screens match the final app experience as closely as possible.
Josh shows how to use stencils to quickly create UI elements and save time in the development process.
Reviewing and Refining the Prototype 45:41
Josh explains the importance of reviewing and refining the prototype to ensure it meets the project requirements.
He demonstrates how to identify and fix broken links and other issues in the prototype.
The process involves iterating on the prototype, incorporating feedback, and refining the UI elements.
Josh emphasizes the need for a clear and accurate input to get the best output from AI tools.
Final Steps and Best Practices 46:18
Josh outlines the final steps in the AI-driven prototype development process.
He emphasizes the importance of saving chat history and project documentation for future reference.
The goal is to create a comprehensive and accurate prototype that can be used as a starting point for the final app development.
Josh encourages the use of multiple tools and integrating their outputs to create a robust and functional prototype.
Creating Multilingual Videos
After completing this video, viewers will know how to create and translate audio and video content into multiple languages using advanced AI-powered workflows. They will be able to generate synchronized lip-sync performances, dubbed audio, and accurate subtitles for up to 36 languages, ensuring a seamless user experience for international audiences. Users will also be able to integrate these multilingual assets into platforms like Amp, allowing viewers to easily switch between languages. This process empowers content creators to efficiently mass produce and manage localized video content for diverse learning environments.
Following are the key things you will be able to do after you watch this demo:
Click here to see and get each tool used in this demo.
Examples Shown in this Demo
Overview of Multilingual Video Creation 0:08
Josh Lomelino introduces the demo overview for creating multilingual videos and integrating them into Anomaly Amp.
Discusses the three methods for delivering multilingual content: audio-only, translation services, and advanced methods.
Highlights the advanced method's ability to generate performances with audio and video in sync in multiple languages.
Mentions the demo will focus on the advanced method, which offers the best user experience.
Preparing the Source Material 3:55
Josh emphasizes the importance of using a high-quality WAV file for the best translation and quality.
Demonstrates the process of preparing the source material, whether it's live or generated.
Explains the steps involved in exporting the audio file as a WAV or MP3.
Discusses the benefits of using a WAV file for better translation and quality.
Translation Process Using 11 Labs 7:08
Josh explains the translation process using 11 Labs, which provides the best translation and vocal performance.
Details the steps for creating a dubbing project in 11 Labs, including specifying the source and target languages.
Discusses the benefits of using multiple speakers and disabling voice cloning for better performance.
Demonstrates the process of uploading and translating an audio file using 11 Labs.
Spot Checking Translations 13:29
Josh shows how to spot check translations using AI translation services if a full translation team is not available.
Explains the process of exporting the translated audio file and re-translating it back to English for validation.
Highlights the importance of having a review team to ensure accuracy.
Discusses the steps for implementing multiple languages into Anomaly Amp.
Advanced Method Demonstration 21:05
Josh demonstrates the advanced method, which generates performances with audio and video in sync in multiple languages.
Explains the sequential process of preparing the source material and translating it using 11 Labs.
Discusses the benefits of using a digital double for creating multilingual videos.
Demonstrates the process of uploading and generating the translated video file.
Integrating Multilingual Videos into Anomaly Amp 28:08
Josh explains the process of integrating multilingual videos into Anomaly Amp.
Discusses the options for switching between languages on the fly.
Demonstrates the steps for creating a new page in Anomaly Amp and uploading the multilingual video.
Highlights the benefits of using Vimeo's advanced tools for managing multilingual videos.
Handling Subtitles and Closed Captions 35:00
Josh discusses the options for handling subtitles and closed captions in multilingual videos.
Demonstrates the process of adding subtitles and closed captions in Vimeo.
Explains the benefits of using AI translation services for generating subtitles.
Highlights the importance of ensuring the subtitles and closed captions are accurate and synchronized.
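Video hosts such as Vimeo generally accept caption uploads in SRT or WebVTT; if a translation tool emits only SRT, the conversion to WebVTT is mechanical: add the WEBVTT header and switch the millisecond separator in time codes from comma to period. A minimal sketch, assuming well-formed SRT input:

```python
def srt_to_vtt(srt_text: str) -> str:
    """Convert SRT captions to WebVTT: prepend the WEBVTT header and
    switch the millisecond separator in time codes from comma to period."""
    out = ["WEBVTT", ""]
    for line in srt_text.splitlines():
        if "-->" in line:
            line = line.replace(",", ".")   # 00:00:01,000 -> 00:00:01.000
        out.append(line)
    return "\n".join(out)

# Hypothetical single-caption SRT from a translation pass
srt = "1\n00:00:01,000 --> 00:00:03,500\nBienvenue dans la demo.\n"
print(srt_to_vtt(srt))
```

Numeric cue identifiers are legal in WebVTT, so they can be kept as-is.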
Implementing Multiple Language Pages 58:30
Josh explains the process of creating multiple language pages in Anomaly Amp.
Discusses the benefits of having a separate page for each language.
Demonstrates the steps for creating and linking the multiple language pages.
Highlights the importance of organizing the content based on the target audience's language preferences.
Text Translation and Localization 59:19
Josh discusses the importance of text translation and localization for multilingual content.
Demonstrates the process of translating text using Google Translate.
Explains the benefits of having a review team to ensure the accuracy of the translated text.
Highlights the importance of localizing the entire site for a seamless user experience.
Architecting the Multilingual Experience 1:04:46
Josh discusses the different ways to architect the multilingual experience in Anomaly Amp.
Explains the benefits of having a separate class for each language.
Demonstrates the steps for organizing the content based on the target audience's language preferences.
Highlights the importance of choosing the best method for delivering multilingual content.