The programme comprises four discussion-driven workshops of 3 hours each, delivered over 4 days.
The Evolution of Artificial Intelligence
The evolution of Artificial Intelligence (AI) traces humanity’s long-standing ambition to create machines that can think, learn and assist in decision-making. Initial progress focused on rule-based programs and symbolic reasoning, laying the foundation for modern AI research. As computing power increased, AI evolved from simple, hand-coded rules to systems capable of learning from data. In recent years, advances in big data, cloud computing, and deep learning have accelerated AI’s growth, leading to widespread applications in education, healthcare, transportation, creative industries and everyday digital tools.
Human-centered AI Innovation
Human-centered AI innovation goes beyond the application of AI: it focuses on creating AI solutions that are grounded in real human needs and experiences. By applying the design thinking framework — empathise, define, ideate, prototype and test — this approach ensures that AI is developed not just for efficiency, but for meaningful and responsible impact. Integrating human-centered design with AI innovation empowers learners to create solutions that are inclusive, sustainable and trustworthy. It highlights the role of human judgment and leadership in shaping AI technologies that enhance lives, support communities and drive positive change.
AI and Public Policy
This module examines how AI is shaping public policy agendas and how governments can use AI to improve public services while safeguarding public interest values. Learners will gain insights into how public policy is developed, how evidence and stakeholders influence policy decisions, and how different policy instruments (regulation, standards, procurement, and public communication) can guide responsible AI adoption. Case discussions and a group exercise will walk learners through the process of making policy recommendations for AI-enabled initiatives.
AI Risks and Governance
This module focuses on the risks AI can pose — technical, ethical, legal, and societal — and the governance mechanisms used to manage them across the AI lifecycle (design, development, deployment, and monitoring). Learners will be introduced to practical frameworks for risk assessment and mitigation (e.g., transparency, human oversight, audits, and incident response) and explore how organisations and governments build trustworthy AI systems. Through a group exercise, learners will incorporate risk, ethics, and governance into their policy recommendations for AI-enabled initiatives.
💡 We Make Sure You Internalise
The programme will culminate in small-group presentations during the final session. Preparation and guidance activities will be scaffolded throughout to ensure that learners are well prepared for their final presentation.
(Please note that LKYSPP EE is in charge only of this academic segment, comprising the workshops and assessment. Other activities are not organised or handled by LKYSPP EE.)