high match confidence
Sentence-level differences:
- Reworded sentence: "Policies we adopt or choose not to adopt on social and ethical issues, especially regarding the use of our products, may be unpopular with some of our employees or with our customers or potential customers, and have in the past impacted and may in the future impact our ability to attract or retain employees and customers."
- Reworded sentence: "Further, actions taken by our customers and employees, including through the use or misuse of our products or new technologies for illegal activities or improper information sharing, may result in reputational harm or possible liability, particularly in light of upcoming regulatory requirements like the Digital Services Act (“DSA”) from the EU."
- Reworded sentence: "We are increasingly building AI into many of our offerings, including generative AI."
- Reworded sentence: "If we enable or offer solutions that draw controversy due to their perceived or actual impact on human rights, privacy, employment, or in other social contexts, we may experience new or enhanced governmental or regulatory scrutiny, brand or reputational harm, competitive harm or legal liability."
- Reworded sentence: "This in turn could undermine confidence in the decisions, predictions, analysis or other content that our AI applications produce, subjecting us to competitive harm, legal liability and brand or reputational harm."
Current (2024):
Policies we adopt or choose not to adopt on social and ethical issues, especially regarding the use of our products, may be unpopular with some of our employees or with our customers or potential customers, and have in the past impacted and may in the future impact our ability to attract or retain employees and customers. We also may choose not to conduct business with potential customers or discontinue or not expand business with existing customers due to these policies. Further, actions taken by our customers and employees, including through the use or misuse of our products or new technologies for illegal activities or improper information sharing, may result in reputational harm or possible liability, particularly in light of upcoming regulatory requirements like the Digital Services Act (“DSA”) from the EU. For example, we have been subject to allegations in legal proceedings that we should be liable for the use of certain of our products by third parties. Although we believe that we have a strong defense against these allegations, legal proceedings can be lengthy, expensive and disruptive to our operations and the outcome of any claims or litigation, regardless of the merits, is inherently uncertain. Regardless of outcome, these types of claims could cause reputational harm to our brand or result in liability. We are increasingly building AI into many of our offerings, including generative AI. As with many innovations, AI and our Customer 360 platform present additional risks and challenges that could affect their adoption and therefore our business. For example, the development of AI and Customer 360, the latter of which provides information regarding our customers’ customers, presents emerging ethical issues.
If we enable or offer solutions that draw controversy due to their perceived or actual impact on human rights, privacy, employment, or in other social contexts, we may experience new or enhanced governmental or regulatory scrutiny, brand or reputational harm, competitive harm or legal liability. Data practices by us or others that result in controversy could also impair the acceptance of AI solutions. This in turn could undermine confidence in the decisions, predictions, analysis or other content that our AI applications produce, subjecting us to competitive harm, legal liability and brand or reputational harm. The rapid evolution of AI will require the application of resources to develop, test and maintain our products and services to help ensure that AI is implemented ethically in order to minimize unintended, harmful impact. Uncertainty around new and emerging AI applications such as generative AI content creation will require additional investment in the licensing or development of proprietary datasets, machine learning models and systems to test for accuracy, bias and other variables, which are often complex, may be costly and could impact our profit margin. Moreover, the move from AI content classification to AI content generation through our development of Einstein GPT and other generative AI products brings additional risks and responsibility. Known risks of generative AI currently include risks related to accuracy, bias, toxicity, privacy and security and data provenance. For example, AI technologies, including generative AI, may create content that appears correct but is factually inaccurate or flawed, or contains copyrighted or other protected material, and if our customers or others use this flawed content to their detriment, we may be exposed to brand or reputational harm, competitive harm and/or legal liability. 
Developing, testing and deploying AI systems may also increase the cost profile of our offerings due to the nature of the computing costs involved in such systems. If we are unable to mitigate these risks, or if we incur excessive expenses in our efforts to do so, our reputation, business, operating results and financial condition may be harmed.
Prior (2023):
Policies we adopt or choose not to adopt on social and ethical issues, especially regarding the use of our products, may be unpopular with some of our employees or with our customers or potential customers, which has in the past impacted and may in the future impact our ability to attract or retain employees and customers. We also may choose not to conduct business with potential customers or discontinue or not expand business with existing customers due to these policies. Further, actions taken by our customers and employees, including through the use or misuse of our products or new technologies for illegal activities or improper information sharing, may result in reputational harm or possible liability. For example, we have been subject to allegations in legal proceedings that we should be liable for the use of certain of our products by third parties. Although we believe that such claims lack merit, legal proceedings can be lengthy, expensive and disruptive to our operations and the outcome of any claims or litigation, regardless of the merits, is inherently uncertain. Regardless of outcome, these types of claims could cause reputational harm to our brand or result in liability. We are increasingly building AI into many of our offerings. As with many innovations, AI and our Customer 360 platform present additional risks and challenges that could affect their adoption and therefore our business. For example, the development of AI and Customer 360, the latter of which provides information regarding our customers’ customers, presents emerging ethical issues. If we enable or offer solutions that draw controversy due to their perceived or actual impact on human rights, privacy, employment, or in other social contexts, we may experience brand or reputational harm, competitive harm or legal liability. Data practices by us or others that result in controversy could also impair the acceptance of AI solutions. 
This in turn could undermine the decisions, predictions or analysis AI applications produce, subjecting us to competitive harm, legal liability and brand or reputational harm. The rapid evolution of AI will require the application of resources to develop, test and maintain our products and services to help ensure that AI is implemented ethically in order to minimize unintended, harmful impact. Uncertainty around new and emerging AI applications such as generative AI content creation may require additional investment in the development of proprietary datasets, machine learning models and systems to test for accuracy, bias and other variables, which are often complex, may be costly and could impact our profit margin if we decide to expand generative AI into our product offerings. Developing, testing and deploying AI systems may also increase the cost profile of our offerings due to the nature of the computing costs involved in such systems.