Few debates have raged longer or more contentiously in the computing industry than this one: Is “open source” better than “closed” when it comes to software development?
The debate has been revived as companies like Google, Meta, OpenAI, and Microsoft differ on how to compete for supremacy in artificial intelligence. Some have chosen a closed model, while others embrace an open approach.
Here's what to know.
What does open source software mean?
Source code is the set of underlying building blocks for the apps you use. Developers may write tens of thousands of lines of source code to create a program that will run on a computer.
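For illustration, here is a tiny piece of source code, a minimal Python sketch rather than anything from a real product, showing the kind of human-readable instructions developers write:

```python
# A few lines of source code: human-readable instructions
# that a computer carries out when the program is run.

def greet(name):
    # Build a greeting from the supplied name.
    return f"Hello, {name}!"

print(greet("world"))  # Prints: Hello, world!
```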
Open source software is any computer code that can be freely distributed, copied, or modified for a developer's own purposes. The nonprofit Open Source Initiative, an industry organization, sets additional provisions and standards for what software counts as open source, but it largely comes down to the code being free and open for anyone to use and improve upon.
What are some examples of open source software?
Some of the most popular software systems are open source, such as Linux, the operating system on which Google's Android mobile system was built. Among the best-known open source products is Firefox, the free, downloadable web browser created by the Mozilla Foundation.
So what is the open-versus-closed debate, and how does it relate to AI?
Tech companies like Google, OpenAI, and Anthropic have spent billions of dollars creating “closed,” or proprietary, AI systems. People outside these companies can't see or tinker with the underlying source code; even paying customers can't.
For a long time, this was not the norm. Most of these companies used to open source their AI research so that other engineers could study and improve on the work. But when tech executives began to realize that the race for more advanced AI systems could be worth billions, they began to wall off their research.
The companies argue that this is for the good of humanity, because these systems are powerful enough to cause catastrophic societal harm in the wrong hands. Critics say the companies simply want to keep the technology away from hobbyists and competitors.
Meta took a different approach. Its CEO, Mark Zuckerberg, decided to open source the company's large language model, a program that learns skills by analyzing vast quantities of digital text collected from the internet. The decision to release the model, LLaMA, allows any developer to download it and use it to build their own chatbots and other services.
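As a rough sketch of what that looks like in practice, a developer might load an openly released model with the open source Hugging Face transformers library, as below. The model ID and prompt are illustrative assumptions, and Meta's LLaMA weights require accepting a license before they can be downloaded.

```python
# A minimal sketch, assuming the Hugging Face `transformers` library
# and PyTorch are installed (pip install transformers torch).
# The model ID is illustrative; access to Meta's LLaMA weights
# is gated behind a license agreement.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-chat-hf"  # hypothetical choice of model
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Encode a prompt, generate a short continuation, and decode it to text.
inputs = tokenizer("What does open source mean?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```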
In a recent podcast interview, Zuckerberg said that no single organization should have “truly superintelligent capabilities that are not widely shared.”
Which is better: open or closed?
It depends on who you ask.
For many technologists and those who embrace hardcore hacker culture, open source is the way to go. Software tools that can change the world should be freely distributed, they say, so that anyone can use them to create interesting and exciting technologies.
Others believe that artificial intelligence has advanced so rapidly that it should be held closely by its creators to guard against abuse. Building these systems also costs enormous amounts of time and money, they say, and companies should be able to charge for access to closed models.
The debate has already spread beyond Silicon Valley and computer enthusiasts. Lawmakers in the European Union and Washington have held meetings and taken steps toward regulatory frameworks for AI, weighing the risks and benefits of open source models along the way.