Singapore made a number of cybersecurity announcements this week, including guidelines for securing artificial intelligence (AI) systems, cybersecurity labels for medical devices, and a new law banning deepfakes in online election advertising.
The new guidelines and companion guide for securing AI systems aim to promote a secure-by-design approach, helping organizations mitigate potential risks in the development and deployment of AI systems.
Also: Can AI and automation adequately manage the growing threats to the cybersecurity landscape?
The Cyber Security Agency of Singapore (CSA) said that “AI systems can be vulnerable to adversarial attacks, where malicious actors intentionally manipulate or deceive the AI system. The adoption of AI can also exacerbate existing cybersecurity risks to enterprise systems, [which] can bring about risks such as data breaches or result in harmful, or otherwise undesired, model outcomes.”
“As such, like any software system, AI must be secure by design and secure by default,” the agency said.
Related article: AI worries 90% of consumers and businesses – see what concerns them most
The guidelines identify potential threats, such as supply chain attacks, and risks, such as adversarial machine learning. They were developed with reference to established international standards and include principles to guide practitioners in implementing security controls and best practices to protect AI systems.
The guidelines cover five stages of the AI lifecycle, including development, operations and maintenance, and end-of-life, the last of which highlights how data and AI model artifacts should be disposed of.
Also: As more cybersecurity professionals lose control over detection tools, they are turning to AI.
In developing the companion guide, CSA said it worked with AI and cybersecurity experts to provide a “community-driven resource” offering “practical” measures and controls. The guide will be updated as the AI security market evolves.
It includes case studies, such as patch attacks on image recognition surveillance systems.
Because the controls primarily address cybersecurity risks to AI systems, the guide does not cover AI safety or other related components, such as transparency and fairness, though some of the recommended measures may overlap, CSA said. It added that the guide also does not address the misuse of AI in cyberattacks, such as AI-powered malware or scams such as deepfakes.
Also: Cybersecurity teams need new skills even as they struggle to manage legacy systems
However, Singapore has passed a new law banning the use of deepfakes and other digitally generated or manipulated online election advertising content.
Such content depicts candidates saying or doing things they did not say or do, and is “realistic enough” that some members of the public would “reasonably believe” the manipulated content is genuine.
Deepfakes will be banned from election campaigns
“The Elections (Integrity of Online Advertising) (Amendment) Bill, which passed its second reading in Parliament, covers content that is generated using AI, including generative AI (Gen AI), as well as non-AI tools, such as splicing,” said Minister for Digital Development and Information Josephine Teo.
“The bill targets the most harmful types of election-related content: content that misleads or deceives the public through a misrepresentation of a candidate’s speech or actions, and that is realistic enough to be reasonably believed by some members of the public,” Teo said. “The condition of realism will be assessed objectively. There is no one-size-fits-all standard, but there are some general pointers.”
Also: One-third of all generative AI projects will be abandoned, says Gartner
These include content that “closely match[es]” the known features, expressions, and mannerisms of the candidate, she said, adding that such content may also use actual persons, events, and places to appear more believable.
While most members of the public might find it hard to believe that the Prime Minister would dispense investment advice on social media, some people have still fallen prey to such AI-generated scams, she noted. “In this regard, the law will apply as long as some members of the public would reasonably believe the candidate did say or do what is depicted,” she said.
Related article: Cyber defense in the spotlight as elections enter the era of generative AI
For content to be prohibited under the new law, four elements must be met: it is an online election advertisement, it is digitally generated or manipulated, it depicts candidates saying or doing something they did not actually say or do, and it is realistic enough that some members of the public would reasonably believe it to be genuine.
Teo said the bill will not outlaw the “reasonable” use of AI and other technologies in election campaigns, such as memes, AI-generated or animated characters, and cartoons. Nor does it apply to “benign cosmetic alterations,” such as the use of beauty filters or adjusted lighting in videos.
Also: Think AI can solve all your business problems? Apple’s new study shows otherwise
The minister also noted that the bill does not cover private or household communications or content shared between individuals or in private group chats.
“However, we are aware that false content can circulate rapidly on open WhatsApp and Telegram channels,” she said. “If prohibited content is reported to be communicated in large group chats involving many users who do not know one another, and which are freely accessible to the public, such communication could be captured under the bill, and we will assess whether action should be taken.”
Related article: Google announces $3 billion investment to tap AI demand in Malaysia and Thailand
The law also does not apply to news published by licensed news organizations, or to members of the public who “inadvertently” reshare messages or links without realizing the content has been manipulated, she added.
Teo explained that the Singapore government will use a variety of detection tools to assess whether content has been digitally generated or manipulated. These include commercial tools, in-house tools, and tools developed with researchers, such as those at the Centre for Advanced Technologies in Online Safety, she said.
Related article: OpenAI opens new office in Singapore to support rapid growth in the region
In Singapore, corrective directions will be issued to relevant parties, including social media services, requiring them to remove or disable access to prohibited online election advertising content.
Social media service providers that fail to comply with a corrective direction can be fined up to S$1 million. All other parties, including individuals, who fail to comply can face a fine of up to S$1,000, imprisonment of up to a year, or both.
Related article: Sony Research’s AI division helps AI Singapore develop large-scale language models
“There has been a notable increase in deepfake incidents in countries where elections have been held or are planned,” Teo said, citing a Sumsub study that found the number of deepfake incidents had tripled in India and grown more than 16-fold in South Korea compared to a year ago.
“Misinformation generated by AI can seriously threaten the foundations of our democracy and demands an equally serious response,” she said, adding that the new law will safeguard the “authenticity of candidates’ representation” and the integrity of Singapore’s elections.
Is this medical device adequately protected?
Singapore is also looking to help users procure medical devices that are adequately secured. CSA on Wednesday launched a cybersecurity labeling scheme for medical devices, expanding a program that previously covered consumer Internet of Things (IoT) products.
The new initiative was developed in collaboration with the Ministry of Health (MOH), the Health Sciences Authority (HSA), and national healthtech agency Synapxe.
Related article: Singapore seeks ‘practical’ medical breakthroughs with new AI research center
The labels are intended to indicate the security level of medical devices and enable healthcare users to make informed purchasing decisions, CSA said. The scheme applies to devices that handle personally identifiable information or clinical data, with the ability to collect, store, process, or transmit that data, as well as to medical devices that connect to other systems and services and can communicate via wired or wireless protocols.
Products are assessed across four rating levels: Level 1 medical devices must meet baseline cybersecurity requirements, while Level 4 products must meet enhanced cybersecurity requirements and also pass independent third-party software binary analysis and security evaluation.
Also: These medical IoT devices pose the greatest security risks
The announcement follows a nine-month sandbox phase that ended in July 2024, during which 47 applications from 19 participating medical device manufacturers put their products, including in vitro diagnostic analyzers, through various tests, such as software binary analysis, penetration testing, and security evaluation.
Feedback gathered during the sandbox phase was used to fine-tune the scheme’s operational processes and requirements, including clarifying the application process and assessment methodology.
Also: Ask a medical question through MyChart? Your doctor might have an AI answer it
Although the labeling program is voluntary, CSA stressed the need for proactive steps to safeguard against growing cyber risks, especially as medical devices increasingly connect to hospital and home networks.
Medical devices in Singapore currently must be registered with HSA and are subject to regulatory requirements, including for cybersecurity, before they can be imported and sold in the country.
Also: AI is reducing therapist burnout. Here’s how mental healthcare is changing
In a separate announcement, CSA said its consumer device cybersecurity labeling scheme is now recognized in South Korea.
The bilateral agreements were signed with the Korea Internet & Security Agency (KISA) and the German Federal Office for Information Security (BSI) on the sidelines of the Singapore International Cyber Week 2024 conference held this week.
The South Korean agreement, slated to take effect from January 1 next year, will see KISA’s IoT cybersecurity certification and Singapore’s cybersecurity label mutually recognized in both countries. It marks the first such agreement Singapore has inked with an Asia-Pacific market; it already has similar agreements with Finland and Germany.
Also: Connecting generative AI to medical data improves its usefulness for doctors
The South Korean certification scheme comprises three levels (Lite, Basic, and Standard), all of which require third-party lab testing. Devices certified at the Basic level are deemed to have met the Level 3 requirements of Singapore’s labeling scheme, which has four rating levels. KISA will likewise recognize Singapore’s Level 3 products as having met its Basic certification.
The consumer labels apply to smart devices such as home automation systems, alarm systems, and IoT gateways.