Why we raised $11.5 Million to turbo-charge the delivery of our enterprise-grade developer platform for Responsible GenAI adoption

Written by: Andrew Pankevicius

Over the last 8 months, Redactive has quickly grown to a 10-person team, with multiple financial services institutions as paying customers trusting our developer platform to take their Generative AI use cases from pilot to production. Today, we are excited to announce our $7.5 million USD ($11.5 million AUD) seed round, co-led by Felicis and Blackbird, alongside Atlassian Ventures and automation unicorn Zapier.

The funds will help us rapidly grow our engineering, marketing, and customer success teams, and scale our operations to the US (where we are already actively working on a number of pilots). Continuing to build a world-class team is vital to our mission: enabling enterprises to leverage their existing software teams to build, deploy, and responsibly govern their own intellectual property that leverages generative AI.

Providing Enterprises with a GenAI capability, not just a single use case

Redactive is focused on ensuring that even the most regulated enterprises have the capabilities to build and maintain their own Generative AI use cases and applications from scratch. Unlike the Copilots and off-the-shelf AI tools you hear so much about, Redactive bolsters the capabilities of an organisation’s software teams so they can create their own GenAI intellectual property for their customers and employees.

To achieve this mission, Redactive’s developer platform solves the most complex problems that traditional software engineers are struggling to navigate in an AI-focused world. We don’t replace your existing software engineers or automate their jobs. Instead, we cover the emerging and lucrative ‘data engineering’ skillset that is table stakes for an AI application to function.

We keep engineers focused on building great user experiences, while we solve the new challenges of implementing an LLM in your application stack, including data pipelining, data access control, auditing, AI governance, and query performance.

Our journey so far

We started tackling this vision by focusing on one problem: allowing LLMs to perform live, permissioned data access. Businesses we met with regularly showed us high-value AI use cases, from AI agents and conversational AI to application pre-processing for insurers, but they were unable to get them into production.

All of these experiments followed the same structure: an interface, an LLM, a vector database, an embeddings model, and an orchestrator service handling chunking and retrieval-augmented generation.
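For readers unfamiliar with that structure, here is a toy sketch of the retrieval-augmented generation pattern these experiments shared. Everything is illustrative, not Redactive’s implementation: a bag-of-words counter stands in for the embeddings model, and an in-memory list stands in for the vector database.

```python
import math
from collections import Counter

def chunk(text, size=50):
    """Split a document into fixed-size word chunks (the 'chunking' step)."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def embed(text):
    """Toy embedding: bag-of-words counts (a real system calls an embeddings model)."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two bag-of-words 'vectors'."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class VectorStore:
    """In-memory stand-in for a vector database."""
    def __init__(self):
        self.rows = []  # list of (chunk_text, embedding) pairs

    def add(self, text):
        for c in chunk(text):
            self.rows.append((c, embed(c)))

    def search(self, query, k=2):
        q = embed(query)
        scored = sorted(self.rows, key=lambda r: cosine(q, r[1]), reverse=True)
        return [text for text, _ in scored[:k]]

store = VectorStore()
store.add("Our leave policy grants 20 days of annual leave per year.")
store.add("Expense claims must be submitted within 30 days of purchase.")

retrieved = store.search("how many days of annual leave do I get?")
# The orchestrator would then send this assembled context to the LLM:
prompt = "Answer using this context:\n" + "\n".join(retrieved)
print(retrieved[0])
```

Note that the copied chunks in `store` are exactly the ‘second source of truth’ the next section complains about: once the source document changes or its permissions change, these rows are stale until a pipeline re-runs.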

The pattern was widespread, but the complaints were common:

  • Significant investment was required to keep the data up to date: data pipelines had to run regularly so the copied data didn’t go stale. Worse, regulated enterprises wouldn’t accept ‘another source of truth’ for a copied document, as it needed to be ‘audit ready’ at any time.

  • Moving the data to make it usable for GenAI stripped it of its fine-grained permissions and decoupled it from its source. In short, it’s almost impossible for companies to retrospectively re-apply permissions that reflect the source for each user.

These challenges taught us that:

  1. AI applications are only as good as the data they have access to.
    For most enterprises, the best data is both permissioned and decentralised across the organisation.

  2. True next-generation AI applications require secure & scalable access to all of a company’s information to drive a real step-change in productivity.

  3. GenAI was too big an opportunity for a security team to block, but too big a threat for security teams to ignore. We needed to enable organisations to get to production.

Redactive allows LLM-powered applications and agents to connect to a variety of enterprise data sources while enforcing permissions live at query time, so that users only see the information they are permitted to access.
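The query-time enforcement idea can be sketched in a few lines. This is a hypothetical illustration, not Redactive’s API: the document shape, the ACL sets, and the `retrieve` function are all invented for the example; the point is that the permission check happens against the live ACL at query time, before anything reaches the LLM.

```python
from dataclasses import dataclass

@dataclass
class Document:
    source: str        # e.g. "wiki", "hr" (illustrative source systems)
    text: str
    allowed_users: set # ACL mirrored live from the source system

# Illustrative corpus; in practice the ACLs come from each source at query time.
CORPUS = [
    Document("wiki", "Q3 revenue target is $4M.", {"cfo", "analyst"}),
    Document("wiki", "Office wifi password rotation schedule.", {"it", "analyst"}),
    Document("hr", "Salary bands for engineering roles.", {"hr"}),
]

def retrieve(query: str, user: str, k: int = 5):
    """Return only chunks this user is permitted to see, checked at query time."""
    visible = [d for d in CORPUS if user in d.allowed_users]
    # A real system would rank `visible` by vector similarity; we keep them all.
    return [d.text for d in visible][:k]

print(retrieve("revenue", "analyst"))  # wiki docs only, never HR salary data
print(retrieve("revenue", "hr"))       # only the HR document
```

Because the filter runs on every query against the current ACL, revoking a user’s access in the source system takes effect immediately, with no pipeline re-run.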

This led Redactive to expand its vision to become the AI engineer for enterprise teams, delivering a platform that lets organisations access data live from the source rather than running complex data pipelines that create substantial security risks. But that’s just the start for us…

Next: A holistic, Responsible AI platform for Enterprises

Redactive AI is actively used in two large Australian financial services organisations as their Responsible AI platform of choice, helping each roll out GenAI use cases to over 500 employees, along with frontier technology companies building AI-native applications for security-conscious enterprises.

The more people used these experiences, the more executive teams made Redactive a strategic part of their Responsible AI strategy. We decided to lean into these needs and formed a vision to help organisations deliver AI responsibly. We found two key groups outside of engineering that our product needed to serve:

  • Security teams needed to understand whether the data permissions they had built into their services had any gaps.

  • Leaders and managers needed a place to manage their organisation’s adoption of LLMs and their safety obligations.

Resolving technical blockers is far simpler than these organisational challenges. These are human-centric problems that require AI agents and employees working together, with the interests of employees as the priority. They are hard problems to solve, but we have deep customer buy-in to fix them. We have already begun to chart our vision of how to support these customers.

Redactive helps security teams remedy permission mismatches across tools using semantic AI analysis, solving a task that was previously impractical for enterprises looking to adopt AI safely.

Redactive is building a platform that doesn’t just allow your engineering team to build powerful GenAI applications with your organisation’s data; it also supports your security team in reducing threats across your organisation and helps leaders manage these applications responsibly. Redactive enables the GenAI capability inside an organisation.

Building a world-class company

We’d stress that there really aren’t many startups in Australia focused on building developer platforms for a global audience, let alone ones focused on empowering enterprise security teams to manage new data threats and supporting business leaders to practise Responsible AI.

We are really proud as a team to be tackling the huge, global challenges facing purposeful AI adoption, and to be making our mark from our corner of the world with the support of top-tier investors. It will come as no surprise, then, that we are looking for top talent passionate about implementing Responsible AI across enterprises in Australia and abroad - feel free to reach out to us here.

Passionate about implementing Responsible AI?

Get in touch with us below!