About Anam
We’re on a mission to humanise technology by developing real-time AI personas that feel as natural to interact with as another human.
Why? The way we interact with technology today has tonnes of friction; it isn’t truly intuitive or interactive. We’ve defaulted to a one-to-many approach because of the limitations of the technology, rather than a one-to-one, personalised approach that is relevant to the individual, not the masses.
Our vision is to build the next computer interface, one that mimics how we have communicated for millennia, by developing AI personas that feel as natural to interact with as a human. Our personas look and chat like humans, speak more than 50 languages, will understand the subtleties of human emotion and expression, and will respond in real time. No product like this exists today because, until now, building it has been practically impossible.
We started this journey less than 9 months ago. Since then, we have grown the team to 6, developed our product, brought on more than 30 design partners that we work with closely, and we’re now launching our MVP this September. We’re backed by leading teams at Concept Ventures and Torch Capital, and by angels from Elevenlabs, Spotify, Sonantic and Speechly.ai. We've already been named a top UK start-up to watch.
What we've built in the last 6 months
- An API that users can integrate with their website. Click here to see the product in action: our v-one personas, built in less than 3 months.
- Click here for a demo of the Anam UI lab, where anyone can create a persona from scratch in less than 2 minutes, chat with it in real time in their browser, and then share it or deploy it on a website for users to interact with.
- Click here to see the early results from our face-generation model. This will be the basis of our v-two personas, available in September.
Info pack I’ll share with leads
- Try our first demo (less than 6 months' work)
- Deck - this will give you an idea of what we're building longer term (use cases are on slides 7 and 8, and slide 11 has more on the market).
- A video to show how the v-two personas are progressing -- this video shows the level of photorealism we should get with our new model. To explain the process at a high level:
    - we fine-tuned our model on footage of this actor
    - we then fed the model a single source image of the actor
    - we tasked the model with recreating the persona using the audio from the original footage
    - the video is the output of the model - this should be the quality of output when we deploy it into our product
    - a caveat here - the lip sync and expressivity won't be as good as this video for the MVP launch, as that's a different part of the problem that we're still solving
- For our MVP launch:
    - Free demo on the website so anyone can chat with a persona, capped at 5 minutes
    - Paid plan (flat fee plus overages), restricted by concurrency
        - Gives access to the anam.lab
        - This allows a user to choose from 6 stock personas created with our new v-two model
        - A user can pick from several voices and backgrounds, then add context to the pre-prompt to give the persona a personality and use case
        - The user can chat with the persona in the browser and share it
        - The user can also then deploy the persona through our API/SDK to their website or app --> this is the only point at which a user needs to be technical (see the sketch at the end of this section)
- Other items coming soon after the MVP launch:
    - Go live with 2 more paid plans
    - Push our one-shot persona model to the product. This will give users the ability to create a persona from a single picture. The step after that is full customisation of any persona's appearance (hair, skin colour, clothing, background).
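
To make the API/SDK deployment step above concrete, here is a minimal sketch in TypeScript of what embedding a persona in a website could look like. It is illustrative only: the package name, the createClient factory, the personaId value, the streamToVideoElement method and the session-token endpoint are assumptions made for the sketch, not the actual SDK surface.

```typescript
// A minimal sketch of the "deploy to your website" step, assuming a browser SDK.
// NOTE: the package name, createClient factory, personaId value,
// streamToVideoElement method and the /persona-session-token endpoint
// are hypothetical placeholders, not the real Anam API.
import { createClient } from "@example/persona-sdk";

async function startPersona(videoElementId: string): Promise<void> {
  // Keep the API key on your own backend and exchange it for a
  // short-lived session token, so nothing sensitive reaches the browser.
  const response = await fetch("/persona-session-token");
  const { sessionToken } = await response.json();

  // Create a client bound to one of the stock personas.
  const client = createClient({
    sessionToken,
    personaId: "stock-persona-1",
  });

  // Stream the persona's real-time video and audio into a <video>
  // element on the page so visitors can chat with it directly.
  await client.streamToVideoElement(videoElementId);
}

startPersona("persona-video").catch(console.error);
```

The backend-issued session token shown here is a common pattern for real-time browser SDKs rather than a confirmed detail of our product; the exact flow may differ once the SDK ships.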