High match confidence
Sentence-level differences:
- Reworded sentence: "Social, ethical, security and regulatory issues relating to the use of new and evolving technologies such as AI and ML, including generative AI and AI models, in our offerings, internal operations or partnerships, may result in reputational harm and liability, and may cause us to incur additional research and development costs to resolve such issues."
- Reworded sentence: "AI and ML also make it cheaper for attackers to create malware, phishing, code reviews, or other tools at a high volume."
- Reworded sentence: "Moreover, use of generative AI in our code development process, while offering various potential benefits, could also pose certain ownership and security risks with respect to our codebase, given the current legal uncertainties relating to ownership of AI- or ML-generated works and the potential for security flaws in output code."
- Reworded sentence: "In the United States, a number of civil lawsuits have been initiated related to the foregoing and other concerns, the outcome of any one of which may, amongst other things, require us to limit the ways in which we use AI in our business."
- Reworded sentence: "Our business, financial condition and results of operations may be adversely affected if content, recommendations, solutions or features or their development or deployment (including the collection, use, or other processing of data used to train or create such AI and ML solutions or features) are found to have or alleged to have infringed upon or misappropriated third-party intellectual property rights or violate applicable laws, regulations, or other actual or asserted legal obligations to which we are or may become subject."
Current (2026):
Social, ethical, security and regulatory issues relating to the use of new and evolving technologies such as AI and ML, including generative AI and AI models, in our offerings, internal operations or partnerships, may result in reputational harm and liability, and may cause us to incur additional research and development costs to resolve such issues. The use of generative and agentic AI, relatively new and emerging technologies in the early stages of commercial use, exposes us to additional risks, such as damage to our reputation, competitive position, and business, legal and regulatory risks and additional costs. For example, generative AI has been known to produce false or “hallucinatory” inferences or output, and certain generative AI uses machine learning and predictive analytics, which can create inaccurate, incomplete, or misleading content, unintended biases, and other discriminatory or unexpected results, errors or inadequacies, any of which may not be easily detectable by us or any of our related service providers. As with many innovations, AI and ML present risks and challenges that could affect their adoption, and therefore our business. Our offerings based on AI and ML may expose us to additional claims, demands and proceedings by private parties and regulatory authorities and subject us to legal liability as well as brand and reputational harm. If we enable or offer solutions that draw controversy due to their perceived or actual impact on society or if the content, analyses or recommendations that AI and ML-powered solutions assist in producing in our products and services are, or are perceived to be, deficient, insufficient, inaccurate, biased, unethical or otherwise flawed, we may experience brand or reputational harm, competitive harm or legal liability. 
Potential government regulation related to AI and ML use and ethics may also increase the burden and cost of research and development in this area, and failure to properly remediate AI and ML usage or ethics issues may cause public confidence in AI and ML to be undermined. The rapid evolution of AI and ML will require the application of resources to develop, test and maintain any potential offerings or partnerships to help ensure that AI and ML are implemented ethically in order to minimize unintended, harmful impact. In addition, the use of AI and ML may enhance intellectual property, cybersecurity, operational and technological risks. Examples of heightened cybersecurity threats emerging from the rise of generative AI include the use of deepfake technologies in phishing and social engineering attacks, and more sophisticated malware that can evade conventional detection tools. AI and ML also make it cheaper for attackers to create malware, phishing, code reviews, or other tools at a high volume. AI and ML may change the way our industry identifies and responds to cybersecurity threats, and businesses that are slow to adopt or fail to adopt such new technologies may face a competitive disadvantage. The use of AI also has resulted in, and may in the future result in, security breaches or other security incidents that implicate the personal data of users of AI-powered applications. If we experience security breaches or incidents in connection with our use of AI and ML, it could adversely affect our reputation and expose us to legal liability or regulatory risk. 
As the regulatory framework for AI, ML and related technologies evolves, it is possible that new laws and regulations will be adopted, or that existing laws and regulations, including intellectual property, privacy, data protection and cybersecurity, consumer protection, competition and equal opportunity laws and regulations, may be interpreted in ways that would affect our business and the ways in which we or our partners use AI, ML or related technologies, our financial condition and our results of operations, including as a result of the cost to comply with such laws or regulations. For example, if we or our third-party AI providers do not have sufficient rights to use the data or other material or content on which AI tools rely, or the output generated by our use of such AI tools, we may incur liability through the violation of such laws, third-party intellectual property, privacy or other rights, or contracts to which we are a party. Moreover, use of generative AI in our code development process, while offering various potential benefits, could also pose certain ownership and security risks with respect to our codebase, given the current legal uncertainties relating to ownership of AI- or ML-generated works and the potential for security flaws in output code. If any of our employees, contractors, vendors or service providers use any third-party AI-powered software in connection with our business or the services they provide to us, it may lead to the inadvertent disclosure of our confidential information, including inadvertent disclosure of our confidential information into publicly available third-party training sets, which may impact our ability to realize the benefit of, or adequately maintain, protect and enforce our intellectual property or confidential information, harming our competitive position and business. 
Our ability to mitigate risks associated with disclosure of our confidential information, including in connection with AI and ML-powered software, will depend on our implementation, maintenance, monitoring and enforcement of appropriate technical and administrative safeguards, policies and procedures governing the use of AI and ML in our business. Moreover, any content created by us using generative AI tools may not be subject to copyright protection which may adversely affect our intellectual property rights in, or ability to commercialize or use, any such content. In the United States, a number of civil lawsuits have been initiated related to the foregoing and other concerns, the outcome of any one of which may, amongst other things, require us to limit the ways in which we use AI in our business. For example, the output produced by generative AI tools may include information subject to certain rights of publicity or privacy laws or constitute an unauthorized derivative work of the copyrighted material used in training the underlying AI model, any of which could also create a risk of liability for us, or adversely affect our business or operations. Our business, financial condition and results of operations may be adversely affected if content, recommendations, solutions or features or their development or deployment (including the collection, use, or other processing of data used to train or create such AI and ML solutions or features) are found to have or alleged to have infringed upon or misappropriated third-party intellectual property rights or violate applicable laws, regulations, or other actual or asserted legal obligations to which we are or may become subject. 
Further, potential government regulation related to AI and ML use and ethics may also increase the burden and cost of research and development in this area, and failure to properly remediate AI and ML usage or ethics issues may cause public confidence in AI and ML to be undermined, which could slow adoption of AI and ML in our products and services. For example, the European Union’s Artificial Intelligence Act (the “EU AI Act”) entered into force on August 1, 2024, with the first set of provisions taking effect on February 2, 2025. The EU AI Act establishes, among other things, a risk-based governance framework for regulating AI systems operating in the EU. This framework categorizes AI systems, based on the risks associated with such AI systems’ intended purposes, as creating unacceptable or high risks, with all other AI systems being considered low risk. The EU AI Act prohibits certain uses of AI systems and places numerous obligations on providers and deployers of permitted AI systems, with heightened requirements based on AI systems that are general purpose, which we currently have, or considered high risk. In addition, a patchwork of state-level AI-related legislation continues to emerge in the United States. We may not be able to anticipate how to respond to these rapidly evolving frameworks, and we may need to expend resources to adjust our operations or offerings in certain jurisdictions if the legal frameworks are inconsistent across jurisdictions. Furthermore, because AI and ML technology itself is highly complex and rapidly developing, it is not possible to predict all of the legal, operational or technological risks that may arise relating to the use of AI and ML. These and other developments may require us to make significant changes to our use of AI and ML, including by limiting or restricting our use of AI and ML, which may require us to make significant changes to our policies and practices. 
The cost to comply with such laws or regulations could be significant and would increase our operating expenses, which could adversely affect our business, financial condition and results of operations. As the utilization of AI and ML becomes more prevalent, we anticipate that it will continue to present new or unanticipated ethical, reputational, technical, operational, legal, competitive and regulatory issues, among others. We expect that our incorporation of AI and ML in our business will require additional resources, including the incurrence of additional costs, to develop and maintain our products and solutions and features to minimize potentially harmful or unintended consequences, to comply with applicable and emerging laws and regulations, to maintain or extend our competitive position, and to address any ethical, reputational, technical, operational, legal, competitive or regulatory issues which may arise as a result of any of the foregoing. As a result, the challenges presented with our use of AI and ML could adversely affect our business, financial condition and results of operations.
Prior text (2025):
Social, ethical, security and regulatory issues relating to the use of new and evolving technologies such as AI, including generative AI, in our offerings or partnerships, may result in reputational harm and liability, and may cause us to incur additional research and development costs to resolve such issues. As with many innovations, AI presents risks and challenges that could affect its adoption, and therefore our business. No assurance can be provided that our use of AI will enhance our products or services or produce the intended results. For example, AI algorithms may be flawed, insufficient, of poor quality, reflect unwanted forms of bias, or contain other errors or inadequacies, any of which may not be easily detectable and AI, particularly generative AI, has been known to produce false or “hallucinatory” inferences or outputs. If we enable or offer solutions that draw controversy due to their perceived or actual impact on society or if the content, analyses or recommendations that AI-powered solutions assist in producing in our products and services are, or are perceived to be, deficient, inaccurate, biased, unethical or otherwise flawed, we may experience brand or reputational harm, competitive harm or legal liability. Potential government regulation related to AI use and ethics may also increase the burden and cost of research and development in this area, and failure to properly remediate AI usage or ethics issues may cause public confidence in AI to be undermined. AI and machine learning may change the way our industry identifies and responds to cybersecurity threats, and businesses that are slow to adopt or fail to adopt such new technologies may face a competitive disadvantage. Our competitors or other third parties may incorporate AI into their products more quickly or more successfully than us, which could impair our ability to compete effectively. 
The rapid evolution of AI will require the application of resources to develop, test and maintain any potential offerings or partnerships to help ensure that AI is implemented ethically in order to minimize unintended, harmful impact. In addition, the use of AI may enhance intellectual property, cybersecurity, operational and technological risks. Examples of heightened cybersecurity threats emerging from the rise of generative AI include the use of deepfake technologies in phishing and social engineering attacks, and more sophisticated malware that can evade conventional detection tools. AI also makes it cheaper for attackers to create malware, phishing, code reviews, or other tools at a high volume. As the regulatory framework for AI and related technologies evolves, it is possible that new laws and regulations will be adopted, or that existing laws and regulations, including intellectual property, privacy, data protection and cybersecurity, consumer protection, competition and equal opportunity laws and regulations, may be interpreted in ways that would affect our business and the ways in which we or our partners use AI or related technologies, our financial condition and our results of operations, including as a result of the cost to comply with such laws or regulations. For example, if we or our third-party AI providers do not have sufficient rights to use the data or other material or content on which AI tools rely, or the output generated by our use of such AI tools, we may incur liability through the violation of such laws, third-party intellectual property, privacy or other rights, or contracts to which we are a party. Moreover, use of generative AI in our code development process, while offering various potential benefits, could also pose certain ownership and security risks with respect to our codebase, given the current legal uncertainties relating to ownership of AI-generated works and the potential for security flaws in output code. 
If any of our employees, contractors, vendors or service providers use any third-party AI-powered software in connection with our business or the services they provide to us, it may lead to the inadvertent disclosure of our confidential information, including inadvertent disclosure of our confidential information into publicly available third-party training sets, which may impact our ability to realize the benefit of, or adequately maintain, protect and enforce our intellectual property or confidential information, harming our competitive position and business. Our ability to mitigate risks associated with disclosure of our confidential information, including in connection with AI-powered software, will depend on our implementation, maintenance, monitoring and enforcement of appropriate technical and administrative safeguards, policies and procedures governing the use of AI in our business. Moreover, any content created by us using generative AI tools may not be subject to copyright protection which may adversely affect our intellectual property rights in, or ability to commercialize or use, any such content. In the United States, a number of civil lawsuits have been initiated related to the foregoing and other concerns, the outcome of any one of which may, amongst other things, require us to limit the ways in which we use AI in our business. For example, the output produced by generative AI tools may include information subject to certain rights of publicity or privacy laws or constitute an unauthorized derivative work of the copyrighted material used in training the underlying AI model, any of which could also create a risk of liability for us, or adversely affect our business or operations. In addition, the use of AI has resulted in, and may in the future result in, security breaches or other security incidents that implicate the personal data of users of AI-powered applications. 
If we experience security breaches or incidents in connection with our use of AI, it could adversely affect our reputation and expose us to legal liability or regulatory risk. Further, potential government regulation related to AI use and ethics may also increase the burden and cost of research and development in this area, and failure to properly remediate AI usage or ethics issues may cause public confidence in AI to be undermined, which could slow adoption of AI in our products and services. For example, the EU AI Act was published in the Official Journal of the EU on July 12, 2024 and entered into force on August 1, 2024. The EU AI Act establishes, among other things, a risk-based governance framework for regulating AI systems operating in the EU. This framework categorizes AI systems, based on the risks associated with such AI systems’ intended purposes, as creating unacceptable or high risks, with all other AI systems being considered low risk. The EU AI Act prohibits certain uses of AI systems and places numerous obligations on providers and deployers of permitted AI systems, with heightened requirements based on AI systems that are considered high risk. We may not be able to anticipate how to respond to these rapidly evolving frameworks, and we may need to expend resources to adjust our operations or offerings in certain jurisdictions if the legal frameworks are inconsistent across jurisdictions. Furthermore, because AI technology itself is highly complex and rapidly developing, it is not possible to predict all of the legal, operational or technological risks that may arise relating to the use of AI. The cost to comply with such laws or regulations could be significant and would increase our operating expenses, which could adversely affect our business, financial condition and results of operations. 
As the utilization of AI becomes more prevalent, we anticipate that it will continue to present new or unanticipated ethical, reputational, technical, operational, legal, competitive and regulatory issues, among others. We expect that our incorporation of AI in our business will require additional resources, including the incurrence of additional costs, to develop and maintain our products and solutions and features to minimize potentially harmful or unintended consequences, to comply with applicable and emerging laws and regulations, to maintain or extend our competitive position, and to address any ethical, reputational, technical, operational, legal, competitive or regulatory issues which may arise as a result of any of the foregoing. As a result, the challenges presented with our use of AI could adversely affect our business, financial condition and results of operations.