Monitor inputs, generate extended clips, merge outputs, log progress, and notify on completion.
The AI agent reads inputs from Google Sheets, generates extended UGC clips with Kling 2.1, and merges each generated clip with its original to produce a seamless longer video. It saves outputs to Google Drive, tracks progress automatically, and publishes to selected social platforms, updating the control sheet in real time for full visibility.
Performs end-to-end video extension, merging, and publishing.
Ingest inputs from a Google Sheet and extract the last frame of each source video.
Generate an extended clip using Kling 2.1 via RunPod based on the provided prompt and duration.
Merge the generated clip with the original to create a seamless extended video.
Upload the final video to Google Drive for storage and access.
Publish automatically to YouTube, TikTok, Instagram, Facebook, X, and LinkedIn via integrated posting services.
Log status and URLs back to the Google Sheet for tracking.
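The steps above can be sketched as a single orchestration loop. Everything below is illustrative: the step names and the Row fields are hypothetical stand-ins for the real Fal.ai, RunPod, Drive, and publishing integrations, not the template's actual node structure.

```python
from dataclasses import dataclass, field

@dataclass
class Row:
    """One control-sheet row: inputs plus mutable status fields."""
    video_url: str
    prompt: str
    duration_s: int
    platforms: list[str]
    status: str = "pending"
    log: list[str] = field(default_factory=list)

def process_row(row: Row, steps) -> Row:
    """Run each pipeline step in order, logging status as the sheet would.

    `steps` is an ordered list of (name, callable) pairs standing in for
    the real integrations (frame extraction, Kling generation, merging,
    Drive upload, publishing)."""
    for name, step in steps:
        row.status = f"running:{name}"
        step(row)                      # real steps would call external APIs
        row.log.append(f"{name}: ok")
    row.status = "published"
    return row

# Stub steps that only record they ran; the real ones hit Fal.ai,
# RunPod, Google Drive, and the posting services.
steps = [(n, lambda row: None) for n in
         ("extract_frame", "generate", "merge", "upload", "publish")]

done = process_row(Row("https://example.com/clip.mp4",
                       "extend the scene", 60,
                       ["tiktok", "youtube"]), steps)
print(done.status)   # published
```

In the real workflow each step also writes its status back to the Google Sheet, so a failed run is visible mid-flight rather than only at the end.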
This AI agent consolidates a fragmented workflow into a single orchestrated process, replacing manual handoffs with automated state tracking. It reduces risk by enforcing a consistent extension and merge sequence, and it provides auditable logs for every run. By centralizing inputs, generation, merging, storage, and publishing, teams can scale production without increasing manual effort.
A simple, three-step flow that non-technical users can follow.
Read inputs from Google Sheets and extract the last frame from each video using Fal.ai.
Send the frame and prompt to Kling 2.1 on RunPod to generate an extended clip of the requested duration.
Merge the extended clip with the original, upload to Google Drive, and publish to social platforms via Upload-Post and Postiz.
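The generation step in this flow amounts to building one request for the RunPod endpoint. RunPod's serverless API wraps model parameters under an "input" key; the parameter names inside it (image_url, prompt, duration) are assumptions here and must match whatever the deployed Kling 2.1 worker actually expects.

```python
def build_kling_request(frame_url: str, prompt: str, duration_s: int) -> dict:
    """Build the JSON body for a RunPod serverless job.

    The field names inside "input" are illustrative; check the
    deployed worker's schema before relying on them."""
    return {
        "input": {
            "image_url": frame_url,   # last frame extracted from the source
            "prompt": prompt,
            "duration": duration_s,
        }
    }

payload = build_kling_request("https://example.com/last_frame.jpg",
                              "continue the scene in the same style", 60)
# A real call would POST this to the endpoint's /run URL with an API key,
# then poll the job's status until it reports COMPLETED.
```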
One realistic scenario.
Scenario: A creator submits a 15-second UGC clip with a 60-second extension prompt. The AI agent processes the inputs, extends the video, merges it, uploads to Drive, and posts to TikTok and YouTube in roughly 8–12 minutes.
Ideal users across teams and roles.
Needs scalable, repeatable UGC extension and posting.
Coordinates multi-platform publishing without manual steps.
Automates routine editing-like steps to extend clips.
Manages multiple campaigns with centralized publishing.
Delivers scalable video production with lower effort.
Self-serve automation for faster delivery.
Connects cloud storage, AI models, and publishing platforms.
Frame extraction and video merging via FFmpeg.
AI video generation based on prompts and duration.
Publish to social platforms (multi-platform posting).
Publish to YouTube via Upload-Post.
Inputs, status tracking, and URL logging.
Store final videos and accessible URLs.
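The FFmpeg work behind frame extraction and merging comes down to two commands, sketched here as subprocess argument lists. File names are placeholders; note that lossless concatenation with the concat demuxer requires both clips to share codec parameters, otherwise a re-encode is needed.

```python
import subprocess

def last_frame_cmd(src: str, out_img: str) -> list[str]:
    """Grab the final frame: seek to just before the end of file
    (-sseof) and emit a single video frame."""
    return ["ffmpeg", "-sseof", "-0.1", "-i", src,
            "-frames:v", "1", "-q:v", "2", out_img]

def merge_cmd(list_file: str, out_video: str) -> list[str]:
    """Concatenate the clips listed in `list_file` (concat demuxer
    format: one `file 'name.mp4'` line per clip); -c copy avoids
    re-encoding but requires matching codecs in both clips."""
    return ["ffmpeg", "-f", "concat", "-safe", "0",
            "-i", list_file, "-c", "copy", out_video]

# Example invocations (require ffmpeg and real media files):
# subprocess.run(last_frame_cmd("original.mp4", "last.jpg"), check=True)
# subprocess.run(merge_cmd("clips.txt", "merged.mp4"), check=True)
```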
Concrete scenarios that benefit from automation.
Common questions about setup and operation.
The AI agent automates the end-to-end process of extending UGC videos, merging clips, and publishing to Drive and social platforms using Kling 2.1 and Fal.ai. It reads inputs from Google Sheets, handles frame extraction, generates the extended content, merges the clips, uploads the outputs, and posts to the selected networks. The system includes status checks and logging to keep you informed. It requires configuration of API keys and OAuth credentials, but once started it operates without manual intervention.
Provide the source video URL, a prompt for extension, and the desired duration in the Google Sheet. The agent will read these inputs, extract the last frame, and generate an extended clip accordingly. It then proceeds through merging, storage, and publishing steps automatically.
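Validating those three inputs up front avoids wasted generation runs. The sketch below is hypothetical: the column names and the duration bound are assumptions, not the template's actual sheet schema.

```python
def parse_row(cells: dict) -> dict:
    """Validate one control-sheet row. Expects three input columns:
    video_url, prompt, and duration (seconds). Column names are
    illustrative; adjust them to your sheet's actual headers."""
    url = cells.get("video_url", "").strip()
    prompt = cells.get("prompt", "").strip()
    if not url.startswith(("http://", "https://")):
        raise ValueError(f"bad video_url: {url!r}")
    if not prompt:
        raise ValueError("prompt is required")
    duration = int(cells.get("duration", 0))
    if not 1 <= duration <= 120:      # assumed sane bound for one clip
        raise ValueError(f"duration out of range: {duration}")
    return {"video_url": url, "prompt": prompt, "duration_s": duration}

row = parse_row({"video_url": "https://example.com/ugc.mp4",
                 "prompt": "extend with a slow zoom out",
                 "duration": "60"})
```

Rejecting a bad row at read time lets the agent mark it as failed in the sheet and move on, rather than discovering the problem mid-generation.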
The agent can publish to YouTube, TikTok, Instagram, Facebook, X, and LinkedIn via integrated services. It also stores final outputs in Google Drive for easy access. Publishing is configurable per run, allowing selective platform posting.
Each external API is checked with status loops and retry logic. If a step fails, the agent logs the error, updates the control sheet, and safely retries or moves to the next item depending on the failure mode. Alerts can be configured to notify your team if manual intervention is required.
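The retry behavior described above can be sketched as a small exponential-backoff wrapper. The helper and the simulated flaky call are illustrative, not the workflow's actual error-handling nodes.

```python
import time

def with_retries(call, attempts=3, base_delay=1.0, sleep=time.sleep):
    """Retry a flaky external call with exponential backoff, returning
    its result or re-raising after the final attempt. `sleep` is
    injectable so tests don't actually wait."""
    for attempt in range(attempts):
        try:
            return call()
        except Exception:
            if attempt == attempts - 1:
                raise                  # exhausted: surface the failure
            sleep(base_delay * 2 ** attempt)

# Simulate an API that fails twice with a transient error, then succeeds.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient 503")
    return "COMPLETED"

result = with_retries(flaky, attempts=5, sleep=lambda s: None)
print(result)   # COMPLETED
```

In the real workflow the final re-raise is where the error gets logged to the control sheet and an alert fires, so exhausted retries never fail silently.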
Yes. The prompt can be adjusted in the RunPod Kling 2.1 node and the surrounding workflow to modulate duration, style, and transitions. Changes apply to subsequent videos; existing runs are not retroactively updated.
The workflow uses OAuth2 for Google services and API keys for external services. Access is restricted to configured accounts, and keys are stored securely in the integration layer. Audit trails are maintained in the Google Sheet, and sensitive data is not exposed in logs.
Yes. The architecture is modular: you can add more API nodes, editors, or triggers to expand the pipeline. Each addition follows the same pattern: read input, generate content, merge, store, and publish with status logging.