3 Important Reasons Why Investors Should Care About GPT-3

GPT-3 may sound like yet another obscure technology announcement, but this powerful new language generator represents a momentous step change that will impact business and society in profound ways – and in the near future.

From a single sentence, or even a few words, it can generate five full, well-written paragraphs. “I’ve been shocked when I’ve seen it,” says Munye. “It’s hard to distinguish from a human in terms of creativity.”

At its simplest, GPT-3 is a tool that allows non-experts in AI to design a range of linguistic solutions using plain English rather than complicated code. While access to the platform is currently limited to a handful of testers, its potential applications are already legion: it can write articles and medical, financial, and legal text; mimic writing styles precisely; compose poetry; act as a therapist; create clever chatbots; write code for an application; design interfaces; and generate endless ideas – all based on a few words of English from the person interacting with it. And this is just the beginning.
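To make that interaction concrete, here is a minimal sketch of what prompting GPT-3 looks like through OpenAI’s API, based on the Python client made available to early testers; the prompt text and parameter values are our own illustrative choices rather than anything from OpenAI’s documentation.

```python
import openai  # OpenAI’s Python client, as used by early GPT-3 API testers

openai.api_key = "YOUR_API_KEY"  # keys were granted to approved testers only

# A few words of plain English replace what would once have required
# custom-built software: here, a prompt asking for investor-facing copy.
response = openai.Completion.create(
    engine="davinci",  # the largest GPT-3 model exposed through the API
    prompt="Write a short paragraph explaining GPT-3 to investors:",
    max_tokens=150,    # cap the length of the generated continuation
    temperature=0.7,   # higher values produce more varied, creative text
)

print(response["choices"][0]["text"])
```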

GPT-3 has been hailed as the most powerful language model in the world, with a human-like capacity to reason and execute a variety of tasks with minimal training required. There is no doubt that this technology will have major implications for business and society alike. Below, we set out the most compelling ones.

In less than two years, OpenAI (GPT-3’s creator) has managed to create a language model with over 100 times more parameters than its predecessor. Not only does the model utilize an eye-popping 175 billion parameters*, it has also been trained on 45TB of text data, an unprecedented amount.

  1. Business implications
  • GPT-3 will impact many of the digital transformations currently underway at most organizations. Not only will it do away with many solutions being developed in-house (which can now be handled more efficiently by GPT-3), but the whole process of ideation, app development, and programming will become more streamlined using GPT-3 or any of its successors. Depending on how OpenAI sells access to its interface or underlying code, it will likely become a platform for rent – an “AI as a Service” – obviating the need for companies to build their own
  • Assuming that OpenAI does rent out GPT-3, it will prove a considerable enabler in levelling the playing field between small entrepreneurs or start-ups and larger organizations. Access to such a platform removes the need to write one’s own algorithm and train it on massive, well-scrubbed datasets, eliminating an advantage that larger businesses currently enjoy
  • It’s inevitable that other research groups, state actors, or corporations will replicate the scale of GPT-3 in the coming months. When that happens and GPT-3-equivalent models are commonplace, big technology firms that rely on algorithmic newsfeeds will have to reassess the way they deliver and promote content. Its impact on voice-activated assistants will also be considerable

Leaving OpenAI in charge may not be a long-term solution, however. “Anytime a tech company becomes a content moderator it ends badly, that’s the general rule,” says Turan, “because you are consolidating moral authority into a company.” It’s not a question of whether the people who run OpenAI are good, moral people; it simply gets tricky when these decisions are made by a commercial entity (OpenAI shifted from a non-profit to a “capped-profit” company last year).

  2. Societal implications
  • GPT-3 is considered a ‘general purpose technology,’ with abilities starting to approach the long-feared human standard: Artificial General Intelligence, a reality that experts thought was largely unforeseeable. Undoubtedly GPT-3 has many limitations still to overcome, but the speed of its progress is mind-boggling, turning a hypothetical, far-off possibility into a much more realistic, near-term probability
  • The immense dataset GPT-3 is trained on is a historical one, inherently carrying biases from the past that are therefore very much present in its current output. OpenAI recently launched a filter that rates content created by GPT-3 on a toxicity scale, flagging some of it for moderation. This is by no means a broad-scale solution, as other tech platforms with similar issues have acknowledged
  • GPT-3 has the ability to replace entire job categories that were thought to be safe for at least a few more decades. Software developers, journalists, and creative writers, to name just a few, may become largely irrelevant, and society will have to invest heavily in reskilling these individuals
  • GPT-3 highlights the danger of a reskilling cycle that moves more slowly than technology advances, potentially training people for jobs that no longer exist by the time workers have been ‘reskilled.’ A case in point is GPT-3’s likely contribution to reduced demand in an area of current reskilling focus: programming. It’s worth noting that GPT-3 is only on its third iteration and an early example of transformer-based models, which are likely to develop and proliferate much further
  • With GPT-3’s capacity to generate countless articles almost instantaneously, there is significant potential for harm through a barrage of fake news. Because the model “trains” itself on internet-wide datasets – fake news included – this risks a sobering, self-reinforcing cycle of misinformation. GPT-3’s ability to copy and reproduce existing writers’ styles near-perfectly also raises the growing challenge of distinguishing ‘real’ from ‘fake’
  • OpenAI is taking a responsible approach by releasing access only to GPT-3’s interface (not the algorithm itself), and only to a small number of testers, to explore possible use cases. However, once the tool is widely available, bad actors can create use cases unforeseen by those few testers. Can OpenAI prevent this incredibly powerful tool from being weaponized once it (and its successors) becomes more widely available? If so, how many resources would that take?

 The [OpenAI] lab was supposed to benefit humanity. Now it’s simply benefiting one of the richest companies in the world

  3. Governance implications
  • OpenAI was initially established as a not-for-profit whose mission was “to ensure that artificial general intelligence (AGI)—by which we mean highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity.” The people who worked there were, ostensibly, there to do exactly that: benefit humanity. Yet a year ago, OpenAI unexpectedly changed its corporate structure to a complex ‘capped-profit’ hybrid in order to commercialize GPT-3 and, in no small part, to pay and retain talent at Silicon Valley rates. As its parent company remains a nonprofit with no shareholders (and an unknown voting structure – in fact, it shares remarkably little information about itself), there is nothing to prevent it from morphing yet again in the future
  • OpenAI LP (the entity monetizing GPT-3) is governed by the board of OpenAI Nonprofit, which appears to be populated by a small group of Silicon Valley insiders, the majority of whom are in their early to mid-thirties. It should be inconceivable that such a small, homogeneous group of young people governs one of the most powerful platforms in the world with no regulatory oversight or external checks and balances
  • OpenAI has just inked an exclusive deal with Microsoft where, not surprisingly, one of its original funders and board members, Reid Hoffman, also sits on Microsoft’s board. While OpenAI will continue to offer its interface to chosen users, only Microsoft will have access to GPT-3’s underlying code, allowing it to embed, repurpose, and modify the model as it pleases. Is this really in humanity’s best interest, or is there a potential conflict of interest?
  • As the content moderator for this powerful tool, OpenAI sits in an unenviable position similar to that of platforms such as Facebook, Twitter, and Google, albeit as a privately held company. Yet it’s unclear how its content-moderation policies will be determined, or by whom, considering the tool’s incredible influence on external stakeholders

CFU believes that technological innovation is important and should be stimulated, but never to the detriment of society, transparency, and accountability – most especially when it comes to technology that is an influential, foundational public utility, as GPT-3 is set to become. In the current absence of a publicly selected board, regulation, or any external oversight, can we rely on the ethics of a small group of insiders in a private business to rein in this unimaginable power?

*Parameters are the values in a neural network that apply particular weights to different aspects of the data it processes; GPT-3 learns them from its 45TB of training text.
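For intuition, the following toy sketch (with entirely made-up numbers) shows what a handful of parameters do: each is a learned weight that scales the influence of one aspect of the input on the output. GPT-3 contains 175 billion such values.

```python
# Toy illustration with made-up numbers: three learned parameters
# (weights) plus a bias determine how strongly each input feature
# influences a single artificial neuron's output.
weights = [0.8, -0.3, 0.5]  # illustrative values; real ones are learned during training
bias = 0.1
inputs = [1.0, 2.0, 0.5]    # e.g., numeric features derived from a piece of text

output = sum(w * x for w, x in zip(weights, inputs)) + bias
print(output)  # ≈ 0.55; GPT-3 tunes 175 billion such numbers during training
```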