AI model security startup HydroX AI has partnered with Meta and IBM to ensure the safety of generative AI models in high-risk industries such as healthcare, finance and law. The collaboration aims to create benchmark tests and toolsets that businesses can use to evaluate the safety of their language models.
California-based HydroX AI was founded in 2023. It has developed an evaluation platform that helps businesses test their language models for safety and security. The company argues that the industry lacks the tests and tools needed to ensure AI models are safe for use in high-risk industries.
Partnering with Meta and IBM is significant for the startup, as both companies have extensive experience in AI safety. Meta has developed tools like Purple Llama for the secure deployment of its AI models, and IBM has committed to publishing the safety measures it takes when developing foundation models. The two companies are also founding members of the AI Alliance.
HydroX AI will contribute its evaluation resources to the AI Alliance, working alongside other member organizations such as AMD, Intel, Hugging Face and universities including Cornell and Yale. The aim is to create a comprehensive framework for evaluating AI models to ensure they are safe, effective and ethical for domain-specific applications.
Each domain presents its own challenges and requirements, so it is important to evaluate large language models for both safety and effectiveness in industry-specific applications. The primary goal is to strengthen trust and facilitate broader adoption.
The partnership underscores the need for collaboration in addressing the safety and security of AI models. HydroX AI chief of staff Victor Bian said the companies have recognized the need to address AI safety and security.