Big Tech's AI Compliance Testing Reveals Security and Bias Issues

Reported about 1 month ago

Recent testing of major AI models developed by OpenAI, Meta, and Alibaba highlights shortcomings in compliance with the EU's upcoming AI Act, particularly around cybersecurity resilience and discriminatory output. The evaluations, conducted by Swiss company LatticeFlow AI, showed that while many models scored well overall, specific concerns were raised about bias and security vulnerabilities. As the EU advances its regulatory framework, companies are urged to address these gaps or face significant fines.

Source: YAHOO

