On October 16, the New York State Department of Financial Services (NY DFS) issued an industry letter providing guidance to its regulated entities (covered entities) on the cybersecurity risks associated with the use of artificial intelligence (AI). The guidance is intended to assist covered entities in understanding and assessing the cybersecurity risks arising from cybercriminals' use of AI, and the controls that may be used to mitigate those risks. The NY DFS emphasizes that this new guidance does not impose any new requirements on covered entities; rather, it outlines how to meet existing compliance obligations under the NY DFS Cybersecurity Regulation, 23 NYCRR Part 500, in light of advancements in AI technology.

According to the NY DFS, the most concerning cybersecurity risks associated with the use of AI include:

  • AI-Enabled Social Engineering. AI has significantly enhanced the capabilities of threat actors to conduct social engineering attacks. These attacks now include highly personalized and sophisticated content, such as deepfakes, which can convincingly mimic individuals to extract sensitive information or prompt unauthorized actions.
  • AI-Enhanced Cybersecurity Attacks. AI enables threat actors to amplify the scale and speed of cyberattacks. By quickly identifying and exploiting vulnerabilities, AI can facilitate the deployment of malware and the exfiltration of nonpublic personal information (NPI).
  • Exposure or Theft of Vast Amounts of NPI. AI systems often require large datasets, including NPI, increasing the risk of exposure or theft. Additionally, the use of biometric data for authentication introduces further vulnerabilities.
  • Increased Vulnerabilities Due to Third-Party Dependencies. The reliance on third-party service providers (TPSPs) for AI-powered tools introduces additional security vulnerabilities. Compromised TPSPs can become gateways for broader attacks on covered entities’ networks.

The 2017 NY DFS Cybersecurity Regulation requires covered entities to assess risks and implement minimum cybersecurity standards designed to mitigate cybersecurity threats relevant to their businesses. The NY DFS provided the following examples of controls and measures that, when used together, help combat AI-related risks.

  • Risk Assessments and Risk-Based Programs. Covered entities should conduct comprehensive risk assessments that include AI-related risks. Specifically, when designing risk assessments, covered entities should address AI-related risks in the following areas: the organization’s own use of AI, the AI technologies utilized by TPSPs and vendors, and any potential vulnerabilities stemming from AI applications that could pose a risk to the confidentiality, integrity, and availability of the covered entity’s Information Systems or NPI. These assessments should inform the development of cybersecurity programs, policies, and procedures tailored to mitigate identified risks.
  • Third-Party Service Provider and Vendor Management. Covered entities must implement robust TPSP policies, including due diligence and contractual protections, to address AI-related threats. TPSPs should be required to timely notify covered entities of any cybersecurity events impacting their systems or NPI.
  • Access Controls. Implementing multi-factor authentication (MFA) and other access controls is crucial to prevent unauthorized access, especially since, as of November 2025, the 2017 Cybersecurity Regulation will require MFA to be in place for all authorized users attempting to access covered entities’ information systems or NPI. While MFA requires authorized users to authenticate their identities using at least two of three authentication factors, covered entities have the flexibility to decide, based on their risk assessments, which authentication factors to use. Covered entities should consider authentication factors that can withstand AI-manipulated deepfakes and other AI-enhanced attacks by avoiding authentication via SMS text, voice, or video and instead using forms of authentication that AI deepfakes cannot impersonate, such as digital-based certificates and physical security keys.
  • Cybersecurity Training. Regular training for all personnel, including senior executives, is essential. The 2017 Cybersecurity Regulation has always required cybersecurity training for all personnel; covered entities must now provide at least annual cybersecurity awareness training that addresses social engineering, including deepfake attacks. Training should cover procedures for responding to unusual requests, such as a request for credentials, an urgent money transfer, or access to NPI.
  • Monitoring. Continuous monitoring of information systems is necessary to detect unauthorized access and new security vulnerabilities. Covered entities that allow personnel to use AI applications such as ChatGPT should also consider monitoring for unusual query behaviors that might indicate an attempt to extract NPI, and blocking queries that might expose NPI to a public AI product or system (an illustrative sketch of this kind of query screening follows this list).
  • Data Management. Effective data management practices, including data minimization and maintaining updated data inventories, can limit the impact of data breaches. Entities should also secure AI systems and the vast amounts of data they process.
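To make the monitoring point above concrete, the following is a minimal, hypothetical sketch of screening outbound queries to a public AI tool for patterns that may suggest NPI. The pattern names, regular expressions, and thresholds are illustrative assumptions only; an actual deployment would rely on a vetted data loss prevention (DLP) tool and the covered entity's own data classification scheme rather than hand-rolled rules.

```python
import re

# Hypothetical NPI patterns for illustration only; a real program would use
# a vetted DLP product and the entity's own classification rules.
NPI_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "account_number": re.compile(
        r"\baccount\s*(?:no\.?|number)\s*[:#]?\s*\d{6,}\b", re.IGNORECASE
    ),
}

def screen_prompt(prompt: str) -> tuple[bool, list[str]]:
    """Return (allowed, matched_pattern_names) for an outgoing AI query."""
    matches = [name for name, pattern in NPI_PATTERNS.items() if pattern.search(prompt)]
    return (len(matches) == 0, matches)

# Example: block and log a prompt that appears to contain NPI before it
# leaves the covered entity's environment for a public AI service.
allowed, hits = screen_prompt("Summarize the dispute for SSN 123-45-6789")
if not allowed:
    print(f"Blocked outbound AI query; possible NPI detected: {hits}")
```

A screening layer like this would typically sit in a proxy or browser plug-in between personnel and the public AI service, with blocked queries logged for the security team to review.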

As AI continues to evolve, so too will the associated cybersecurity risks. Covered entities must regularly review and update their cybersecurity programs to address these dynamic threats. By integrating AI into their cybersecurity strategies, entities can enhance their ability to detect, respond to, and recover from cyberattacks. This is a critical first step in a company’s AI journey; however, forward-looking companies are quickly moving beyond these risk-reduction strategies and simultaneously taking proactive steps to ready themselves for the AI revolution while enhancing cyber controls and governance.

Looking Beyond Today — Three Critical Trends to Plan for Now. Prioritizing AI readiness at strategic companies often includes longer-range, high-impact projects that anticipate hurdles legal departments can help overcome, including:

  • Trend #1 — Moving to a World of Curated Databases (versus Public “ChatGPT” Models Based on the Internet). The purpose-built business AI solutions of today and the not-too-distant future will be based on “Curated Databases,” not information freely available on the internet, and companies should prepare to move from large language models trained on public information (like ChatGPT) to these purpose-built solutions. This migration will involve large language models and other AI platforms that are trained on, and/or can access, proprietary databases containing either first-party data (e.g., your own SharePoint, document retention, contract management, and other internal documents) or licensed data (e.g., LexisNexis, Westlaw, Bloomberg, Wall Street Journal, New York Times). A critical first step for success and speed to market will be for legal departments to engage with the business to proactively identify potential AI use cases and match them to available internal and licensed sources of data as well as the appropriate/approved AI tool.
  • Trend #2 — Data Is the New Oil . . . Companies and Law/Consulting Firms Will Compete Based on the Quality of Their Data. Going forward, as everyone will have publicly available information ingested into their databases, law firms, consulting firms, businesses, and other commercial ventures will often compete on the quality and cleanliness/accuracy of the proprietary data they use with increasingly available new AI tools and platforms. Once the data sources are identified, a ready-made data cleansing strategy is a key accelerator, allowing companies to remove redundant, unreliable, obsolete, and trivial data from targeted data sources. Deploying a defined combination of data discovery, data classification, data purging, and security access tools can ready a company for NY DFS compliance and, more generally, help it move from ideation to production more quickly (a simplified sketch of this kind of classification appears after this list). Negotiating licenses, validating privacy and security safeguards, and governing records management issues are just a few of the ways legal departments can add value to this process.
  • Trend #3 — A New Generation of AI Assessments Will Be Born. Finally, getting to “go” will require a new set of governance steps in the form of AI assessments to validate accuracy, fairness, bias, and safety. Developing a prototype assessment ahead of time can greatly reduce time-to-production and business frustration. Legal departments can often assist by leveraging existing assessment processes, including privacy assessments, security assessments, third-party assessments, and risk assessments, to evolve a next-generation process that accounts for new AI assessment requirements. While legal departments and firms have been helping companies evaluate new use cases on the front end through “impact assessments” or similar documentation, the trend to watch will be law firms conducting, under privilege, AI Fairness Assessments (similar to Fair Lending assessments conducted in financial services), Model Drift and Accuracy Assessments, and Compliance Assessments related to privacy, security, data use/intellectual property, international data transfers, and more. While many companies focus on what they may lose in the AI revolution, there will be many new opportunities for legal departments and law firms to play a critical role in defining and directing traffic on the AI road ahead.
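As a simplified illustration of the data cleansing step described in Trend #2, the sketch below tags files as redundant, obsolete, or trivial (“ROT”). The retention threshold, size cutoff, and classification rules are hypothetical assumptions; an actual program would be driven by the company's records retention schedule, legal holds, and dedicated data discovery and classification tooling.

```python
import hashlib
from datetime import datetime, timedelta
from pathlib import Path

# Hypothetical thresholds for illustration; real values would come from the
# company's records schedule and legal hold requirements.
OBSOLETE_AFTER = timedelta(days=7 * 365)
TRIVIAL_UNDER_BYTES = 1024

def classify_rot(root: Path) -> dict[str, list[Path]]:
    """Tag files under `root` as redundant, obsolete, or trivial (ROT)."""
    buckets: dict[str, list[Path]] = {"redundant": [], "obsolete": [], "trivial": []}
    seen_hashes: set[str] = set()
    cutoff = datetime.now() - OBSOLETE_AFTER
    for path in root.rglob("*"):
        if not path.is_file():
            continue
        digest = hashlib.sha256(path.read_bytes()).hexdigest()
        if digest in seen_hashes:
            buckets["redundant"].append(path)   # exact duplicate content
        elif datetime.fromtimestamp(path.stat().st_mtime) < cutoff:
            buckets["obsolete"].append(path)    # not modified within retention window
        elif path.stat().st_size < TRIVIAL_UNDER_BYTES:
            buckets["trivial"].append(path)     # too small to carry business value
        seen_hashes.add(digest)
    return buckets
```

The output of a pass like this would feed the data purging and security access steps noted above, with legal review before anything is deleted or migrated into a curated database.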

To learn more about the impact on your company and product pipeline, please contact Kim Phan, Jim Koenig, Joel Lutz or any member of our Privacy + Cyber or Consumer Financial Services teams.