By Interconnects
Published on April 03, 2024
We’re so early in the process of defining community norms and standards around open LLMs that the goals of the people involved are not at all clear. Open source means many things to many different people, and this is by design. The guiding principles and rules of open source make it succeed precisely when multiple groups shape its path. I’ve written a lot about how an open-source LLM should be described (definitions) and where things stand on the crucial discussion points around them (e.g., bio-risk, licenses, etc.), but just as important to the development of the technology is the who and the why.
Recently, MIT Technology Review released an article whose title argues the opposite of this post: that definitions of open-source AI are problematic. The authors argue that, without a change in the current trajectory, big tech will define open-source AI to be whatever suits it. The article generally covers recent events well, but its title is another contributor to the narrative against open-source AI that I’m trying to counteract. Given that open-source projects are designed to involve many people, disagreement on definitions is part of the process and should be expected. Yes, we should better support the bodies establishing standards, but no, we shouldn’t be surprised by drawn-out disagreements.