Key changes:
- Updated: "Further, technology and security vulnerabilities of acquisitions, business partners or third-party providers may not be identified during due diligence or soon enough to mitigate potential risks."
- Updated: "Such cyber-attacks are of ever-increasing levels of sophistication, including the use of adversarial AI techniques (for example, AI may be used to automate phishing attacks and other forms of social engineering, and accelerate vulnerability exploitation), and are made by groups and individuals with a wide range of motives (including, but not limited to, industrial espionage, extortion, property destruction and personal information theft) and expertise, including, but not limited to, organized criminal groups, “hacktivists,” nation states, employees, business partners and others."
- Updated: "Such incidents could require disclosure to government authorities and/or regulators and could require notification to affected individuals and any incident could result in financial, legal, business and reputational harm to us."
- Updated: "AI, including machine learning and generative AI, is increasingly being used in the biopharmaceutical and global healthcare industries."
- Updated: "For example, existing regulatory frameworks governing the use, development, and deployment of AI, including regulatory guidance on the use of AI in medical products, remain uncertain and subject to significant change."
Current (2026):
Significant disruptions of IT systems or breaches of information security could adversely affect our business. We extensively rely upon sophisticated IT systems (including cloud services) to operate our business. We produce, collect, process, store and transmit large amounts of confidential information (including personal information and intellectual property), and we deploy and operate an array of technical and procedural controls to maintain the confidentiality, integrity and availability of such confidential information. We develop and operate digital systems to engage patients, healthcare providers, governments, payors and supply chain partners to conduct business and deliver medicines, digital diagnostics, clinical trials and digital therapies. Such systems include mobile applications, wearable devices, internet websites and other digital technologies that may be targets of attack. We have outsourced significant elements of our operations, including significant elements of our IT infrastructure, and, as a result, we manage relationships with many third-party providers who may have access to our confidential information. We rely on technology developed, supplied and/or maintained by third parties that may make us vulnerable to “supply chain” style cyber-attacks. Further, technology and security vulnerabilities of acquisitions, business partners or third-party providers may not be identified during due diligence or soon enough to mitigate potential risks. The size and complexity of our IT and information security systems, and those of our third-party providers (and the large amounts of confidential information present on them), make such systems potentially vulnerable to service interruptions or to security breaches from inadvertent or intentional actions by parties including, but not limited to, our employees, contingent workers, service providers, business partners, customers or malicious attackers.
As a global pharmaceutical company, our systems and assets are the target of frequent cyber-attacks. Such cyber-attacks are of ever-increasing levels of sophistication, including the use of adversarial AI techniques (for example, AI may be used to automate phishing attacks and other forms of social engineering, and accelerate vulnerability exploitation), and are made by groups and individuals with a wide range of motives (including, but not limited to, industrial espionage, extortion, property destruction and personal information theft) and expertise, including, but not limited to, organized criminal groups, “hacktivists,” nation states, employees, business partners and others. Due to the sophistication of some of these attacks, there is a risk that they may remain undetected for a period of time. While we have invested in the protection of data and IT and develop and maintain systems and controls, our efforts, like those of other similar companies, have not always prevented and may not in the future prevent service interruptions, extortion, theft of confidential, personal or proprietary information, compromise of data integrity or unauthorized information disclosure. Any technology service interruption or breach of our systems could adversely affect our business operations and/or result in potential legal liability, the loss of personal data, confidential information or intellectual property. Such incidents could require disclosure to government authorities and/or regulators and could require notification to affected individuals and any incident could result in financial, legal, business and reputational harm to us. We maintain cyber liability insurance; however, this insurance may not be sufficient to cover the financial, legal, business or reputational losses that may result from an interruption or breach of our systems. AI, including machine learning and generative AI, is increasingly being used in the biopharmaceutical and global healthcare industries. 
We have begun deploying AI in various parts of our internal and external operations, including in R&D, and continue to explore further use cases for AI, and our investments in AI may not yield anticipated benefits. As with many developing technologies, AI presents risks and challenges. For example, existing regulatory frameworks governing the use, development, and deployment of AI, including regulatory guidance on the use of AI in medical products, remain uncertain and subject to significant change. New regulatory requirements could impose significant compliance costs, limit our ability to deploy AI, require changes to our practices (including documentation, risk management, testing, or transparency measures), or increase litigation and enforcement risk. In addition, AI may be used by payors to limit access to care or to deny prior authorization requests, which could impact Pfizer’s business results. Further, AI technology itself can give rise to risks. Generative AI output is probabilistic in nature and may not be reproducible or generate consistent results over time. AI can generate outputs that are false, misleading, incomplete, or inconsistent, and may be difficult to monitor, explain, or reproduce. AI performance may also degrade over time due to changes in inputs, data drift, updates by vendors, or adversarial manipulation. AI design or training may be flawed, including as a result of external vendors or others training AI models on content without the necessary intellectual property rights or other legal rights or permissions, or using data sets that may not be appropriate for the intended use, may be of poor quality, may contain biased information, or may become corrupted during a cyber-attack; and flawed or inappropriate data practices by data scientists, engineers, and end-users could impair results.
If the output that AI produces or assists in producing is deficient or inaccurate, we could be subjected to competitive harm, regulatory scrutiny, potential legal liability and brand or reputational harm. The use of AI may also lead to the unauthorized release of confidential or proprietary information, which may impact our ability to realize the benefits of our data, including intellectual property. Further, reliance on third-party tools or services that incorporate AI may expose our organization to compliance gaps that are outside of our control. In addition, we could face risks related to our reliance on a small number of AI models or service providers.
Prior (2025):
Significant disruptions of IT systems or breaches of information security could adversely affect our business. We extensively rely upon sophisticated IT systems (including cloud services) to operate our business. We produce, collect, process, store and transmit large amounts of confidential information (including personal information and intellectual property), and we deploy and operate an array of technical and procedural controls to maintain the confidentiality, integrity and availability of such confidential information. We develop and operate digital systems to engage patients, healthcare providers, governments, payors and supply chain partners to conduct business and deliver medicines, digital diagnostics, clinical trials and digital therapies. Such systems include mobile applications, wearable devices, internet websites and other digital technologies that may be targets of attack. We have outsourced significant elements of our operations, including significant elements of our IT infrastructure and, as a result, we manage relationships with many third-party providers who may or could have access to our confidential information. We rely on technology developed, supplied and/or maintained by third-parties that may make us vulnerable to “supply chain” style cyber-attacks. Further, technology and security vulnerabilities of acquisitions, business partners or third-party providers may not be identified during due diligence or soon enough to mitigate exploitation. The size and complexity of our IT and information security systems, and those of our third-party providers (and the large amounts of confidential information that is present on them), make such systems potentially vulnerable to service interruptions or to security breaches from inadvertent or intentional actions by, but not limited to, our employees, contingent workers, service providers, business partners, customers or malicious attackers. 
As a global pharmaceutical company, our systems and assets are the target of frequent cyber-attacks. Such cyber-attacks are of ever-increasing levels of sophistication, including the use of adversarial AI techniques, and are made by groups and individuals with a wide range of motives (including, but not limited to, industrial espionage, extortion, property destruction and personal information theft) and expertise, including, but not limited to, organized criminal groups, “hacktivists,” nation states, employees, business partners and others. Due to the nature of some of these attacks, there is a risk that they may remain undetected for a period of time. While we have invested in the protection of data and IT and develop and maintain systems and controls, our efforts, like those of other similar companies, have not always and may not in the future prevent service interruptions, extortion, theft of confidential, personal or proprietary information, compromise of data integrity or unauthorized information disclosure. Any technology service interruption or breach of our systems could adversely affect our business operations and/or result in potential legal liability, the loss of personal data, confidential information or intellectual property. Such incidents could require disclosure to government authorities and/or regulators and could require notification to impacted individuals and any incident could result in financial, legal, business and reputational harm to us. We maintain cyber liability insurance; however, this insurance may not be sufficient to cover the financial, legal, business or reputational losses that may result from an interruption or breach of our systems. AI is increasingly being used in the biopharmaceutical and global healthcare industries. As with many developing technologies, AI presents risks and challenges. 
For example, algorithms may be flawed or trained on content without the necessary intellectual property rights or other legal rights or permissions; data sets may not be appropriate for the intended use, of poor quality, contain biased information, or become corrupted during a cyber-attack; and inappropriate or controversial data practices by data scientists, engineers, and end-users could impair results. If the outputs that AI produces or assists in producing are deficient or inaccurate, we could be subjected to competitive harm, potential legal liability and brand or reputational harm. Furthermore, use of AI may lead to the release of confidential information which may impact our ability to realize the benefits of our data, including intellectual property.