Researchers from software supply chain security firm Rezilion have investigated the security posture of the 50 most popular generative AI projects on GitHub. They found that the more popular and newer a generative AI open-source project is, the less mature its security is. Rezilion used the Open Source Security Foundation (OpenSSF) Scorecard to evaluate the large language model (LLM) open-source ecosystem, highlighting significant gaps in security best practices and potential risks in many LLM-based projects. The findings are published in the Expl[AI]ning the Risk report, authored by researchers Yotam Perkal and Katya Donchenko.
