
British university spinoff Mindgard protects companies from AI threats
AI creates a dilemma for companies: Don’t implement it yet, and you might miss out on productivity gains and other potential benefits; but do it wrong, and you might expose your business and clients to unmitigated risks. This is where a new wave of “security for AI” startups come in, with the premise that these threats, such as jailbreak and prompt injection, can’t be ignored.
A spinoff from a British university, Mindgard is one such startup. The company’s CEO and CTO, Professor Peter Garraghan, emphasizes that AI is still software, so all cyber risks that are applicable to regular software also apply to AI. However, the unique nature of neural networks and systems justifies a new approach.
In order to tackle these threats, Mindgard has developed Dynamic Application Security Testing for AI (DAST-AI) that targets vulnerabilities that can only be detected during runtime. This involves continuous and automated red teaming, a way to simulate attacks based on Mindgard’s threat library. For instance, it can test the security of an AI model by trying to manipulate its responses or outputs.
The startup has now secured $8 million in a new funding round, led by Boston-based venture capital firm .406 Ventures. This brings the total amount raised by Mindgard to £3 million seed round in 2023 and $8 million in this current round.
This fresh injection of funds will be used for expanding the team, further developing their product, conducting research and development, as well as moving into the United States market. The company plans to keep its R&D and engineering activities based in London while establishing a marketing presence in Boston.
Mindgard’s headcount is currently 15 people, but it plans to reach 20-25 employees by the end of next year. Garraghan believes that AI security “is not even in its heyday yet,” and that when AI does start getting deployed everywhere, and security threats follow suit, Mindgard will be well-prepared.
The company’s ultimate goal is to ensure that people can trust and use AI safely and securely, according to Garraghan.
Source: techcrunch.com