Our current operations, products, and services use AI technologies, including proprietary machine learning and AI algorithms and models. Examples of our current uses of AI and machine learning include (i) using algorithms to process video and machine data to identify surgical activities and surgical indicators to support learning, teaching, and practice management, and (ii) using algorithms to support surgical planning and navigation. Future innovations in our products and services will likely continue to incorporate AI, and these applications may become important in our operations over time, for example, through our development of machine learning-enabled medical devices ("MLMDs"). As with many technological innovations, there are significant risks and challenges involved in maintaining and deploying these technologies, and there can be no assurance that the usage of such technologies will enhance our products or services or be beneficial to our business, including our efficiency or profitability. Our ability to continue to maintain or use such technologies may depend on access to specific third-party software and infrastructure, such as processing hardware, and we cannot control the availability or pricing of such third-party software and infrastructure, especially in a highly competitive environment. Our products and services may not compete effectively with alternative products and services if we are not able to source and integrate the latest technologies into our products and services. In addition, several aspects of intellectual property protection in the field of AI are currently under development, and there is uncertainty and ongoing litigation in different jurisdictions as to the degree and extent of protection warranted for AI technologies and relevant system inputs and outputs. If we fail to obtain protection for the intellectual property rights concerning our AI technologies, or later have our intellectual property rights invalidated or otherwise diminished, our competitors may be able to take advantage of our research and development efforts to develop competing products, which could adversely affect our business, reputation, financial condition, or results of operations. Refer to our risk factor titled "If we are unable to fully protect and successfully defend our intellectual property from use by third parties, our ability to compete may be harmed" for additional risks related to intellectual property.

The regulatory landscape surrounding AI is also evolving, and the use of machine learning technologies may expose us to an increased risk of regulatory enforcement and litigation. As the FDA and other regulatory authorities continue to develop guiding principles for AI and incorporate them into their regulation of MLMDs, it is possible that medical products using AI and machine learning will become subject to significant additional oversight, including with respect to premarket review, modification, monitoring, maintenance, and device performance. In the U.S., legislation related to AI technologies has been introduced at the federal level and has been enacted by various states. At the federal level, in January 2025, the Trump administration rescinded an executive order relating to the safe and secure development of AI technologies that had previously been issued by the Biden administration.
The Trump administration then issued a new executive order that, among other things, requires certain agencies to develop and submit to the president action plans to "sustain and enhance America's global AI dominance," and to specifically review and, if possible, rescind rulemaking taken pursuant to the rescinded Biden executive order. Additionally, in December 2025, the Trump administration signed the "Ensuring a National Policy Framework for Artificial Intelligence" Executive Order, which calls for federal standards and legislation that would preempt conflicting state AI regulations and creates a federal litigation task force focused on challenging state AI laws in court. The Trump administration may continue to rescind other existing federal orders and/or administrative policies relating to AI technologies or may implement new executive orders and/or other rulemaking relating to AI technologies in the future. U.S. states continue to advance a patchwork of AI regulatory frameworks, including general requirements around transparency, risk management, and accountability for AI technologies. Several states, such as Colorado, California, and Connecticut, have enacted or proposed laws governing high-risk AI uses, including rules that address algorithmic discrimination, impact assessments, and consumer disclosures. A growing subset of these efforts specifically targets health-related AI, with states like Illinois and New York adopting provisions that regulate AI used in clinical decision support, diagnostics, and other health-related functions, often requiring heightened testing, documentation, or oversight. Such additional regulations, and uncertainty around their enforceability, may impact our ability to develop, use, and commercialize AI technologies in the future.

Apart from the U.S., policymakers in key jurisdictions, such as the EU, are actively working on legislation and regulations to encourage the development and use of ethical and safe AI technologies. For example, the EU Artificial Intelligence Act ("EU AI Act"), which entered into force on August 1, 2024, establishes a comprehensive, risk-based governance framework for AI in the EU market. The majority of the substantive requirements of the EU AI Act will apply from August 2, 2026. The EU AI Act applies to companies that develop, use, and/or provide AI in the EU and, depending on the AI use case, imposes requirements around transparency, conformity assessments and monitoring, risk assessments, human oversight, security, accuracy, general purpose AI, and foundation models, and provides for fines for breach of up to 7% of worldwide annual turnover. In addition, the revised EU Product Liability Directive came into force in December 2024 and is to be implemented into EU member state national law by December 2026. This Directive extends the EU's existing strict product liability regime to AI technologies and AI-enabled products and facilitates civil claims in respect of harm caused by AI. Once fully applicable, the EU AI Act and the revised EU Product Liability Directive will have a material impact on the way AI is regulated in the EU. Further, in Europe, we are subject to the General Data Protection Regulation ("GDPR"), which regulates our use of personal data for automated decision-making that results in a legal or similarly significant effect on an individual and provides rights to individuals in respect of that automated decision-making.
Recent case law from the Court of Justice of the European Union ("CJEU") has taken an expansive view of the scope of the GDPR's requirements around automated decision-making and introduced uncertainty in the interpretation of these rules. The EU AI Act and the developing interpretation and application of the GDPR in respect of automated decision-making, together with developing guidance and/or decisions in this area, may affect our use of AI technologies and our ability to provide, improve, or commercialize our business, require additional compliance measures and changes to our operations and processes, result in increased compliance costs and potential increases in civil claims against us, and could adversely affect our business, financial condition, or results of operations. Other jurisdictions where we operate have already introduced, or are expected to introduce, guidelines and regulations around the use of AI within the next few years. The cost to comply with such laws, regulations, decisions, and/or guidance interpreting existing laws, or to adjust our business plans based on changes to how such laws are enforced, could be significant and would increase our operating expenses (such as by imposing additional reporting obligations regarding our use of AI technologies). Such an increase in operating expenses, as well as any actual or perceived failure to comply with such laws and regulations, could adversely affect our business, financial condition, and results of operations.

A breach or failure in our security measures could occur from a variety of circumstances and events, including third-party action, employee negligence or error, malfeasance, computer viruses, cyberattacks, or ransom-related attacks by computer hackers, failures during the process of upgrading or replacing software and databases, power outages, hardware failures, telecommunication failures, user errors, or catastrophic events, and any of the foregoing events could have a material adverse effect on our business, financial condition, or results of operations. For more information on risks associated with the processing of confidential and sensitive information, including personal information, refer to our risk factor titled "Information technology system failures, cyberattacks, or deficiencies in our cybersecurity could harm our business, customer relations, financial condition, or results of operations."

Though we have taken steps to be thoughtful in our development, training, implementation, and use of AI and machine learning technologies, including taking steps to comply with the laws and frameworks discussed above that are currently in effect, our AI and machine learning-related processing could pose certain risks to our customers, including patients, clinicians, and healthcare institutions, and it is not guaranteed that regulators will agree with our approach to limiting these risks or to our compliance more generally. Risks include, but are not limited to, the potential for errors or inaccuracies in the algorithms or models used by the MLMDs, the potential for bias or inaccuracies in the data used to train the MLMDs, the potential for improper processing of personal information that could lead to deprecation of our algorithms, and the potential for cybersecurity breaches that could compromise patient data or device functionality.
Such risks could negatively affect the performance of our products, services, and business, as well as our reputation and the reputations of our customers, and we could incur liability through violations of laws or of contracts to which we are a party, or through civil claims.