By Shehnaz Ahmed
What happens when the unstated Silicon Valley principles that encourage innovation are extended to the corporate governance of emerging-tech companies? The recent leadership turmoil at OpenAI—the creator of ChatGPT, the artificial intelligence (AI) system that took the world by storm over a year ago—offers some clues: its board abruptly fired star CEO Sam Altman, only for him to be dramatically reinstated within a matter of days, with the board itself restructured. Beyond l’affaire Altman’s implications for OpenAI, the episode raises significant questions about the corporate governance of AI companies that could shape the future of such technologies. OpenAI’s legal structure highlights the complexity of balancing public good with commercial considerations while developing foundational technologies. The Altman episode puts the spotlight on the role of corporate governance in AI safety in particular, and calls for a more nuanced approach.
However, OpenAI’s structure, while aimed at preventing unrestrained profit-driven AI growth, raised questions about the unbridled power vested in the board, which ultimately led to a lack of accountability. OpenAI is not the only tech company exploring alternatives to the traditional for-profit model. Rivals like Anthropic and Inflection AI are structured as “public benefit corporations” (under US laws), allowing their directors to balance shareholder interests with a public benefit purpose. Anthropic has also created a Long-Term Benefit Trust consisting of independent trustees with the power to appoint and remove board members, thereby giving the trust control over the company in the longer run.
As world leaders deliberate on an appropriate response to AI governance, AI is being developed and funded by private companies. AI governance will therefore also depend on the governance of the entities funding and developing these systems. From a corporate governance perspective, the OpenAI fiasco offers important lessons for policymakers and the industry.
First, there is a need to examine the role of the corporate structures of AI developers. The rise of alternative corporate structures can be traced to the idea of the public benefit corporation (PBC), recognised by many US states, which allows a company to operate with the objective of achieving social good rather than focusing solely on profit maximisation. These structures indicate a gradual move from shareholder primacy to stakeholder capitalism.
However, such structures pose legal questions in countries like India, which draw clear distinctions between for-profit companies and not-for-profit companies (like Section 8 companies). The limitations of existing structures were noted in the report of the High Level Committee on Corporate Social Responsibility, constituted by the ministry of corporate affairs. The committee recommended creating social impact companies for India along the lines of a PBC, with a view to driving private investment for social impact.
Second, despite a protective legal structure, investors may still have the power to exert influence on companies. In OpenAI’s case, Microsoft’s substantial investment gave it significant soft power to sway talent (Altman and most employees were ready to join Microsoft) and thereby put pressure on the board. The ambiguity of public good goals may create dilemmas in a market-driven environment. Thus, clear communication among stakeholders remains crucial, regardless of the legal structure.
Third, it must be acknowledged that pursuing AI safety and ethics while forgoing profit maximisation is challenging for capital-intensive projects like OpenAI. The incident may make investors more cautious about such legal structures, underscoring the need to identify a viable business case for AI safety.
Last, the appointment of independent directors may seem like a simple solution for balancing commercial interests with public good. However, the ambiguity surrounding the “public good” a company pursues can lead to differences in interpretation between founders and directors. Mere independence from investors is therefore insufficient. Mechanisms requiring directors to actively pursue the company’s public good goals, and subjecting them to external scrutiny, are necessary for accountability. Going forward, the eligibility criteria for independent directors should encompass not only financial independence but also diverse backgrounds, competencies and expertise aligned with the company’s broader mission.
AI development is a significant technological advancement, and the pursuit of safe and ethical AI requires a delicate balance between commercial interests and public good.
Given the social implications of this technology, it is imperative that companies involved in AI development establish transparent and accountable governance mechanisms.
(The author is fintech lead, Vidhi Centre for Legal Policy. Views are personal)