Think fast: keyword-driven video AI is just around the corner. If people can, they will drop manual animation and native rendering for a complete solution at a fraction of the cost. That's perfectly fine, and there's no need to worry about users ditching iClone 8 or CC4 in the future, because you already have the infrastructure and labels in iClone 8 and CC4 to guide an integrated AI: prop and character position, contents, time, size, angle, character parts, facial expressions, wrinkles, etc. Use them to your advantage for the AI.
I highly recommend working with Sora: create a snapshot and tagging system for character parts and clothing, so users can tell Sora to skip detection and rely on the tag descriptions instead, or use both.
Artists will still need to create character designs, and objects in CC4 or iClone 8 can serve as references for the AI to set distance and identify what the content is. CC4 and iClone could be a powerful AI aid where simply moving things into position is more helpful than writing text to explain it!
I foresee iClone becoming a powerful AI director - if development starts now.
The plugin could include the project summary/story, while everything in the scene carries labels, positions, scale, etc.
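To make the idea concrete, here is a minimal sketch of what such a scene export could look like. All field names and values are hypothetical illustrations, not an actual iClone/CC4 or Sora API:

```python
# Hypothetical per-scene metadata a Sora plugin could export: a story summary
# plus one labeled record per object with position, scale, and angle.
scene = {
    "summary": "A gull lands on a harbor railing at sunset.",
    "objects": [
        {
            "label": "harbor_railing",
            "tag": "rusty iron railing, weathered paint",
            "position": [2.0, 0.0, 1.2],   # meters in scene space
            "scale": [1.0, 1.0, 1.0],
            "angle_deg": [0.0, 90.0, 0.0],
        },
        {
            "label": "gull_01",
            "tag": "adult seagull, wings folded",
            "position": [2.0, 1.1, 1.2],
            "scale": [1.0, 1.0, 1.0],
            "angle_deg": [0.0, 45.0, 0.0],
        },
    ],
}

# The plugin could turn this into prompt text automatically, so the user
# never has to describe the layout by hand.
prompt_lines = [scene["summary"]] + [
    f'{o["tag"]} at {o["position"]}' for o in scene["objects"]
]
print("\n".join(prompt_lines))
```

The point is that the labels already exist in the scene; the plugin just has to serialize them instead of asking the user to retype everything.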
A single object could be tagged as a flock of birds; maybe in the future a program like Sora could serve as the viewport and be toggled back to regular geometry.
If Sora isn't capable of real time yet, then at least a toggle to render a still-image reference would still be very helpful.
This solution could spare users the burden of learning other software such as Omniverse or UE for necessary tasks like rendering, letting them stay mostly within iClone/CC4.
It has apparently been suggested that Sora could become the main driver for UE renders, so if scenes were easily set up and animated in iClone, the UE bridge could also benefit creators.
A while back I suggested a photobooth plugin for CC4 to automatically capture all the body parts/angles needed to create 2D characters for CTA. Once again, this Sora plugin could capture CC4 and CTA characters for use in Sora film building, with all the information sent to Sora automatically. No need to type everything out; even hobbyists would be able to create great-looking films!
Additionally, I've heard an issue raised on VFX forums: video AI is destructive, meaning that if you type the exact same prompt, it won't generate the scene exactly the same way. Capturing the particular AI seed would be essential, and if the props and characters already have labels, that should help bring consistency and work like Photoshop layers.
Beyond labels, I think a water gizmo zone tool is needed so users can position and scale influence over specific areas; a director may ask for larger waves crashing against a cliff or a building. You could also add a weakness value to a primitive or another gizmo zone in the label to designate it as a skyscraper that will fracture.
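A gizmo zone like this could be a very small data structure. Here is an illustrative sketch, with entirely hypothetical fields (this is not an existing iClone feature):

```python
from dataclasses import dataclass

@dataclass
class GizmoZone:
    """A labeled region the AI could read: placement plus effect strength."""
    label: str        # e.g. "wave_crash_zone"
    center: tuple     # (x, y, z) position in scene units
    size: tuple       # (w, h, d) extents of the zone
    influence: float  # 0..1, how strongly the effect applies here
    weakness: float   # 0..1, how readily contents fracture

# Larger wave crashes against a cliff: high influence, nothing fractures.
cliff_waves = GizmoZone("wave_crash_zone", (10.0, 0.0, -3.0),
                        (6.0, 4.0, 6.0), influence=0.9, weakness=0.0)

# A skyscraper designated to fracture: the weakness value does the work.
tower = GizmoZone("skyscraper_fracture", (0.0, 0.0, 0.0),
                  (20.0, 120.0, 20.0), influence=0.7, weakness=0.8)
```

Positioning and scaling such a zone in the viewport would again replace a paragraph of prompt text with a single drag.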
iClone could help Sora and vice versa: if Sora returned seed values to be stored in prop labels, along with a thumbnail, it could re-identify each prop and keep its structure and material style consistent. That would go a long way toward a non-destructive workflow.
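As a rough sketch of that round trip, each prop label could carry the seed plus a thumbnail fingerprint. Everything below is hypothetical (Sora exposes no such public API today); it only shows the shape of the record:

```python
import hashlib

def make_prop_record(label: str, seed: int, thumbnail_bytes: bytes) -> dict:
    """Bundle what's needed to re-identify a prop across generations."""
    return {
        "label": label,
        "seed": seed,  # reuse this seed to reproduce the same look
        "thumb_sha256": hashlib.sha256(thumbnail_bytes).hexdigest(),
    }

def same_prop(a: dict, b: dict) -> bool:
    """Treat a prop as 'the same' when label and thumbnail hash both match."""
    return a["label"] == b["label"] and a["thumb_sha256"] == b["thumb_sha256"]

# On the next generation, the plugin checks the stored record before
# re-rendering, keeping structure and material style stable.
cliff = make_prop_record("cliff_face", seed=421337,
                         thumbnail_bytes=b"...png bytes...")
```

Matching on label plus thumbnail hash (rather than seed alone) is what would let the plugin notice when a prop's look has drifted and fall back to the stored seed.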