The New York State Department of Financial Services (DFS) has issued new guidance to help DFS-regulated entities identify and address cybersecurity risks arising from artificial intelligence (AI).
“AI is improving the ability of businesses to enhance their threat detection and incident response strategies, while also creating new opportunities for cybercriminals to commit crimes at greater scale and speed,” DFS Superintendent Adrienne A. Harris said in a Wednesday (Oct. 16) press release. “As AI-enabled tools become more prolific, we will continue to ensure our standards remain rigorous, safeguarding critical data while allowing the flexibility needed to address diverse risk profiles in an ever-changing digital landscape.”
The guidance does not impose new requirements, according to the release. Instead, it helps DFS-regulated institutions meet their existing obligations under cybersecurity regulations.
Under those regulations, covered entities must assess and address cybersecurity risks, including those arising from AI, and implement multiple layers of security controls with overlapping protections, so that if one control fails, other controls are in place to defend against a cyberattack, the release states.
Controls and measures that help mitigate AI-related threats include risk assessments and risk-based programs, policies, procedures and plans; management of third-party service providers and vendors; access controls; cybersecurity training; monitoring processes to detect new security vulnerabilities; and data management practices.
According to the release, the guidance notes that AI-specific security risks include AI-enabled social engineering, AI-enhanced cyberattacks, theft of nonpublic information, and increased vulnerabilities due to supply chain dependencies.
“As AI continues to evolve, so too will AI-related cybersecurity risks,” the guidance states. “Detecting and responding to AI threats will require similarly sophisticated countermeasures, which is why it is important for covered entities to regularly review and reassess their cybersecurity programs and controls, as required by Part 500.”
According to “AI in Focus: Waging Digital Warfare Against Payments Fraud,” a PYMNTS Intelligence study conducted in collaboration with Brighterion, a Mastercard company, 93% of acquirers that use AI to detect fraud reported that fraud had increased in the previous year.
The report also found that 60% of acquiring banks consider AI systems their most important fraud detection tool, and that 75% of acquirers use AI to detect transaction fraud.