Physical Address
Kampala, Uganda
The docudrama Einstein and the Bomb explores Albert Einstein’s involvement in the creation of the atomic bomb and the moral dilemma he grappled with following the disastrous impact of its use in 1945. Reflecting on scientific progress, Einstein remarks that “every important advance brings new questions”.
One advance that continues to bring new questions is generative artificial intelligence (GenAI / AI), which has seen explosive growth over the past year. The extraordinary potential of AI to improve and revolutionize many aspects of our society cannot be overemphasized. At the same time, AI carries grave risks, the materialization of which could set us on a radically destructive path.
Last week, OpenAI announced Sora, a text-to-video AI model that can create what it refers to as “realistic” videos from text instructions. This article reflects on this critical advancement, highlights the potential risks of GenAI, and makes a case for a regulatory framework on the use of AI in Uganda.
Sora
While Sora is not yet available to the public, the demonstrations of its capabilities are indeed eyebrow-raising. Sora can generate videos of up to one minute in length from a prompt, which may be either text or an image.
OpenAI notes that the model has a deep understanding of language, which enables it to accurately interpret prompts and generate “compelling” videos with “vibrant” emotions. The model has been trained to generate complex scenes with multiple characters, based on an understanding of how such scenes work in the real world.
While Sora is a truly remarkable advancement, with numerous potential benefits for sectors such as the arts and entertainment, its potential for abuse is immense if it is deployed without sufficient guardrails.
The Case for a Regulatory Framework
GenAI continues to heighten the risk of fraud and misinformation at scale. One manifestation of these risks is deepfakes – technology that can be used to alter videos, images, voices, and text in documents, taking fraud risk to new heights, especially in the financial services sector. For example, CNN recently reported the story of a finance worker who was tricked into making a USD 25 million payment by fraudsters using deepfake technology.
OpenAI indeed acknowledges the risks of GenAI and notes that while it will implement safeguards such as C2PA metadata, which would flag a video as having been generated by Sora, it is unable to anticipate “all the ways” in which people will abuse the model.
It is this acknowledgment by OpenAI that makes out the case for a regulatory framework to govern the deployment and use of AI. Certain jurisdictions, such as the U.S. and the E.U., have made significant progress towards establishing such frameworks.
The draft E.U. AI Act (the “Act”) adopts a risk-based approach to regulation, classifying AI systems into categories based on the level of risk they pose. AI systems deemed to pose an unacceptable risk of harm – such as those that impair decision-making, exploit vulnerabilities, or perform social scoring, among others – are banned outright. The Act also proposes minimum standards for high-risk and lower-risk AI systems. Importantly, the Act places responsibility on the developers of AI models to implement governance processes before putting these systems on the market.
A similar approach may be adopted for the financial services sector in Uganda. The responsibility for implementing governance processes should, however, extend to the downstream users of AI systems – such as financial services providers. This would provide balanced protection for the public against individual risks posed by AI, such as breaches of privacy, while enhancing the sector’s security posture against external attacks and risks.
Conclusion
Regulation has always played catch-up to technology. However, given the threats that AI poses, there is an urgent need for a society-wide, proactive approach to putting safeguards in place against these threats, to avert the unintended consequences of innovation.
Reflecting on the deployment of the atomic bomb in Japan, Einstein remarked that he had been motivated by the pursuit of peace, but that had he been acutely aware of the potential consequences, he would have adopted a different course. We may not be able to alter the course of AI’s development, but we can proactively implement safeguards to minimize the harm and risks it poses.
Disclaimer: “The views and opinions expressed on the site are personal and do not represent the official position of Stanbic Uganda and Khulani Capital.”