Looking to 2024: What’s Next in AI? Enhancing Safety, More Regulatory Oversight and Leveraging AI for Education

December 5, 2023

IN BRIEF

  • In the coming year, we could see a greater push by developers to improve the safety of AI systems, and more countries addressing the need for AI regulation. Amid the rising popularity of generative AI tools, mastering their full capabilities will also become more important.

One year ago, OpenAI’s chatbot ChatGPT created a sensation across the globe with its ability to generate everything from essays to e-mails and poems in a matter of seconds.

Since then, the focus has turned more broadly to generative artificial intelligence (AI) tools that can produce text, images, videos and even computer code in response to user prompts, using machine-learning algorithms trained on massive datasets.

The immense potential of generative AI tools such as Bard, Midjourney or Stable Diffusion, however, comes with a host of concerns, including the risk of inaccuracy and misinformation, copyright infringement, potential biases and job displacement.

Underscoring these concerns, 28 countries and the European Union signed the Bletchley Declaration in November to cooperate on AI safety. That same month, the leadership turmoil at OpenAI sparked by the sudden ouster and eventual reinstatement of chief executive Sam Altman stunned the tech world.

So, what new developments can we expect next year? What is the outlook for AI given the fast-evolving regulatory landscape, and how will it impact the nature of work and education? Professor David Tan from NUS Law, Professor Hahn Jungpil from the Department of Information Systems and Analytics at NUS Computing, and Mr Jonathan Sim, a lecturer with the Department of Philosophy at the NUS Faculty of Arts and Social Sciences, cast their eyes on what’s next in AI.

AI to Come Under More Government Oversight Amid Lingering Copyright Concerns
Amid the rapid pace of developments in AI technologies, a growing number of countries are addressing the need for AI regulation, notes NUS Law’s Prof Tan. Another issue for regulators relates to whether the content produced through these generative AI systems could constitute copyright infringement. Prof Tan, who is also Co-Director of NUS Law’s Centre for Technology, Robotics, Artificial Intelligence and the Law (TRAIL), also weighs in on the shake-up at OpenAI and what it could mean for the future direction of AI.

“The use of AI will soon be regulated by the European Union based on the level of risk AI systems pose to human health, safety and fundamental rights. China has also introduced regulations, known as the Deep Synthesis Provisions, which came into force in January 2023. Presently in Singapore, the government has yet to consider an omnibus regulation of the use of AI. At a broader level, while the use of AI can bring immense benefits to society, there are multifarious risks such as job displacement, deepfakes and disinformation, invasion of privacy, social manipulation and weaponisation. I hope that the Singapore government will form a study group next year to discuss this issue. Time is of the essence. Surely, we cannot afford to fall too far behind what the European Union and China are doing in this area.

All the machinations behind the dismissal of Sam Altman, a job offer by Microsoft, and his subsequent reinstatement at OpenAI, the company behind ChatGPT, will have significant implications for all of humanity. Some believe that the direction OpenAI takes in the future will not be the same as its original mission of developing AI that is safe for the world to use. Altman’s purported vision of techno-omnipotence is unsettling.

Generative AI can inspire awe and concern in equal measure. At a more parochial level, our lives may have been enriched by generative AI, and our secret dreams of being a writer or artist are now realised with the aid of ChatGPT or Stable Diffusion, but that does not mean these benefits should come at no cost. Even before a legal framework for ethical and responsible use can be established, the companies behind ChatGPT and other generative AI applications are already facing lawsuits over the unauthorised use of copyrighted works as training data. Of immediate relevance is whether copyright adequately protects the writers, photographers and artists whose works have been used in this manner. Copyright is at its core driven by an economic ethos: it exists to reward original creative effort, and any subsequent uses that draw on and reproduce creative content ought to pay an appropriate fee. Perhaps the best things in life are not free.”

Greater Push for More ‘Responsible’ AI as Developers Seek to Improve Models
Concerns over the potential of AI to generate inaccurate or harmful content and the rising drumbeat of regulatory scrutiny could push more developers to work on improving their models, says NUS Computing’s Prof Hahn, who is also Deputy Director (AI Governance) at AI Singapore. He notes that aside from enhancing safety, developers will also be incentivised to improve their models to make their products more valuable to users.

“The advent of foundation models and generative AI models such as LLMs (large language models like the one powering ChatGPT) has created tremendous excitement about how individuals and organisations can leverage the technology to create value. That said, these AI technologies are not perfect; they are still immature. We know that tools such as ChatGPT do not always provide factually accurate information, and they can even produce harmful content, including discriminatory content, misinformation and fake news.

In 2024, I expect a strong push by the developers of these foundation models to improve the quality of AI systems, with a special focus on safety, toward making AI "responsible". There is currently much debate globally about how to regulate AI in light of these safety concerns: regulation may be able to limit some of the safety issues, but how to do so effectively remains an open question. I believe that companies developing AI systems, such as OpenAI, will focus on making the technology safer, since building a better, and safer, system will make their products more valuable.

At present, these foundation models are built and trained on almost all of the data that is publicly accessible on the internet, without much consideration for its quality. I believe that companies will focus on creating better AI models by carefully curating their training datasets so that the trained models perform better.”

Skills Gap Between Savvy and Non-savvy Generative AI Users to Widen
While ChatGPT’s arrival last year prompted concern over its potential to facilitate cheating and plagiarism in the education sector, educators now face the larger question of how to leverage generative AI tools to transform teaching and learning in a world where AI will become increasingly prevalent. Mr Sim, who is also an Associate Fellow of the NUS Teaching Academy, says the rise of generative AI will widen the disparity between individuals skilled in exploiting its full capabilities and those less adept at doing so.

“ChatGPT’s debut in November 2022 has radically changed how students learn. It has become so ubiquitous that many students are now commenting, ‘I cannot imagine university life without ChatGPT.’

Yet, masterful usage does not come naturally. I’ve observed that a minority of users excel at using generative AI collaboratively, amplifying their capabilities as if they were working in a team. The vast majority, however, underutilise generative AI, unaware of its full capabilities. This disparity risks widening the performance gap between those who are adept at using generative AI and those who are not.

If we are committed to ensuring that our students are ready for the future, then we educators must first be ready for it ourselves. We must learn to master generative AI’s collaborative potential to enhance our own abilities by embracing a change in mindset: first, seeing it as a consultative tool that enhances our work rather than as an answer generator; and second, treating its outputs as drafts, extracting the valuable “nuggets of gold” rather than accepting them wholesale. This approach is not intuitive to most users; it has to be cultivated through repeated use. By adopting this mindset, we can pave the way for students to use generative AI responsibly and rigorously to enhance their learning.”


This story first appeared in NUSNews on 4 December 2023.
