Discussions
Animation
Is it possible for me to create my own animation drivers?
About real-time avatar API
Does the real-time avatar API support multiple sessions?
Or is there only one session for one API key?
user has no permission for custom driver (Animation Endpoint API)
Hello, I'm wishing you a great day!
Video generating time in regards to upgraded paid plan for api
I'm currently using the API free trial, but I'm interested in upgrading to a paid plan. My use case: I'm creating a self-avatar using GPT-4 Turbo and ElevenLabs, where I've cloned my voice. I then pass the audio URL and an image to the D-ID API to generate a video for the response. I'm currently experimenting with clips, since my application is a chat format. If I upgrade the API plan, will I see faster video-generation times?
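The pipeline described above (cloned ElevenLabs audio plus an image, submitted to D-ID for a clip) might look something like the sketch below. This is a hypothetical illustration only: the endpoint path, field names, and the presenter ID and audio URL are assumptions based on the general shape of the D-ID clips API, not working values — check the current API reference before use.

```python
import json

# Hypothetical sketch of the clip request described above: a pre-generated
# ElevenLabs audio URL plus a presenter is submitted to D-ID's /clips endpoint.
# All IDs and URLs below are placeholders, not working values.
D_ID_CLIPS_URL = "https://api.d-id.com/clips"  # assumed endpoint

def build_clip_payload(presenter_id: str, audio_url: str) -> dict:
    """Build the JSON body for a clip driven by a pre-generated audio file."""
    return {
        "presenter_id": presenter_id,
        "script": {
            "type": "audio",        # drive the avatar with uploaded audio
            "audio_url": audio_url  # e.g. the ElevenLabs-generated speech
        },
    }

payload = build_clip_payload("PRESENTER_ID_PLACEHOLDER",
                             "https://example.com/reply.mp3")
print(json.dumps(payload, indent=2))

# The actual call would then be something like (requires the requests package
# and a real API key; shown here as a comment only):
# requests.post(D_ID_CLIPS_URL, json=payload,
#               headers={"Authorization": "Basic <API_KEY>"})
```

Generation time itself is governed by your plan and D-ID's queue, not by anything in the request body, so the payload is the same on free and paid tiers.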
user has no permission for clips:write after paying membership to Build.
I've paid for membership to the Build API plan, but when I use my API key to create a /clips request I get:
ElevenLabs custom voice
Hi there, I can't figure out how to pass an ElevenLabs custom voice to D-ID.
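One common pattern for questions like the one above is to reference the external voice in the script's `provider` object rather than pre-rendering audio. The sketch below is an assumption about how that might look — the provider name, `voice_id` field, and voice ID value are all hypothetical and should be verified against the current D-ID API reference.

```python
import json

# Hypothetical sketch only: the field names follow the general pattern D-ID
# uses for text-to-speech providers, but confirm them in the API docs.
def build_text_script(text: str, voice_id: str) -> dict:
    """Text script rendered with an ElevenLabs custom voice (assumed schema)."""
    return {
        "type": "text",
        "input": text,
        "provider": {
            "type": "elevenlabs",   # assumed provider name
            "voice_id": voice_id,   # the cloned-voice ID from ElevenLabs
        },
    }

script = build_text_script("Hello from my cloned voice!", "EL_VOICE_ID_123")
print(json.dumps(script, indent=2))
```

This `script` object would then be embedded in the body of a /talks or /clips request in place of an audio-type script.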
Interactive AI Agent that uses GPT-4 for responses
How can I create an "Agent" that I can talk with in real time and have conversations about PDFs of texts I upload as a knowledge base?
failed to upload image
None of my images will upload. I tried adjusting the image size, the face size, different pictures of a person, etc., but every upload failed.
ElevenLabs Instant Cloning Voices
I have built an application around using D-ID for AI assistants. When I talked to a D-ID representative earlier this month, they said to look forward to being able to use instant cloning from ElevenLabs, as long as you have an account and are paying for the D-ID API. I haven't seen any updates in the hub on this feature. I need it, and I want the audio generated entirely within the D-ID API, without having to call the ElevenLabs API directly to get that URL. Please help.
Fluency and Padding
We want to produce videos with multiple phrases in the script and have the avatar interpolate at the beginning and end of each phrase, so that we can manipulate the play order without the video looking jerky. As far as I can see, the fluency option is only available at the whole-video level, so we would have to produce a video for each phrase (some of which would be very short) and edit them together into one video. Am I correct, or is there a feature in the API to return the avatar to a default position after each phrase (Synthesia has this feature)? If not, it would be a great feature for future development.