What Are Companies Expressing in Risk Factors?

While the media has been buzzing about the emergence of new forms of artificial intelligence and machine-learning systems (AI), many companies are carefully assessing how to integrate these technologies into their operations. Perhaps to a greater extent than with previous technological advances, these companies need to weigh the significance of the risks and uncertainties AI poses. Forbes recently highlighted the top five risks associated with generative AI that business leaders should remain vigilant about: disruption risk, cybersecurity risk, reputational risk, legal risk, and operational risk.

Several companies have begun addressing the implications of AI in their recent 10-K and 10-Q filings, although they represent fewer than 10% of the companies in major indices like the S&P 500 and Russell 3000. These pioneering companies span a diverse range of industries, including vehicle automation, technology, biomedical-pharmaceutical, healthcare, software, retail, insurance, consumer finance/lending, banking, credit card/payment, asset management, online education, social media, gaming, hiring, workforce management, search engines, digital services, agriculture, data science, and more.

Certain companies have included distinct risk factors, as illustrated by the following examples from Meta’s latest 10-Q:

“We may not be successful in our artificial intelligence initiatives, which could adversely affect our business, reputation, or financial results.

We are investing substantially in artificial intelligence (AI) initiatives, including recommending relevant unconnected content across our products, enhancing our advertising tools, and creating new product features using generative AI. Particularly, we anticipate that these AI initiatives will necessitate increased investment in infrastructure and headcount. AI technologies are intricate and rapidly evolving, subjecting us to formidable competition from other firms and a continually changing regulatory landscape. These endeavors, encompassing new product introductions or modifications to existing ones, might result in new or heightened governmental or regulatory scrutiny, legal disputes, ethical concerns, or other complexities that could adversely impact our business, reputation, or financial outcomes. For example, the datasets used to develop AI models, the content generated by AI systems, or the application of AI systems could be deemed inadequate, offensive, biased, harmful, or in violation of present or future laws and regulations. Additionally, the reception of AI technologies in the market remains uncertain, and our product development efforts might prove unsuccessful. Any of these elements could have detrimental effects on our business, reputation, or financial standing.”

Other companies variously discuss AI within broader risk disclosures, touching on topics such as:

- Uncertain success of new platforms or products incorporating AI.
- Rising competition from new technologies, including AI, that might make a firm’s products or services obsolete.
- Possible failures in integrating AI into business systems, including bugs, vulnerabilities, or algorithmic flaws that are not easily discernible.
- Cybersecurity risks, including targeted, automated, and coordinated attacks, and unauthorized use of AI tools that could compromise the operations, accessibility, and security of company or customer data, including sensitive information.
- Potential legal or reputational harm from inadequate or biased data, unintentional bias or discrimination stemming from AI usage, or unauthorized use of AI tools, along with any resulting negative publicity or public perception.
- Potential exposure of confidential or proprietary information through the use of AI-based software by employees, vendors, suppliers, contractors, consultants, or third parties.
- Heightened risk of cyberattacks or data breaches as AI is used to launch more automated, targeted, and coordinated attacks, coupled with AI’s own vulnerability to cybersecurity threats.
- The challenge of attracting and retaining employees with AI expertise, or of competing for talent via AI tools.
- Uncertainties in case law and regulation concerning intellectual property ownership and license rights in AI output, creating risks around protecting underlying intellectual property and inadvertent infringement.
- The possible need to adapt business practices to comply with U.S. and non-U.S. laws and regulations, such as privacy laws, that apply to the use of AI in products or services, as stipulated by regulators, industry guidelines, and more.

As with other risks, if AI is deemed significant, companies should integrate it into their enterprise risk management systems, along with their disclosure controls and procedures.

EXAMPLES OF DISTINCT AI RISK FACTORS

DOORDASH: “We may integrate artificial intelligence into our business, and challenges in effectively managing its use could lead to reputational damage, competitive harm, legal liability, and negative impacts on our financial performance.

We may incorporate artificial intelligence (“AI”) solutions into our platform, services, and features, and these applications may become essential to our operations over time. Our competitors or third parties might adopt AI into their products more rapidly or effectively, potentially hindering our competitive edge and affecting our financial results. Additionally, if AI applications produce deficient, inaccurate, or biased content, analyses, or recommendations, it could harm our business, financial condition, and results of operations. The use of AI applications has previously resulted in cybersecurity incidents involving end user personal data. Any cybersecurity incidents tied to our use of AI applications could adversely affect our reputation and financial performance. AI also presents ethical concerns, and if our AI usage becomes a subject of controversy, it could lead to brand damage, reputational harm, legal liabilities, or competitive challenges. The rapid evolution of AI, along with potential government regulation, demands substantial resources for ethical AI implementation, aiming to minimize unintended negative consequences.”

PLANET LABS: “Issues tied to the utilization of artificial intelligence, including machine learning and computer vision (collectively referred to as AI), in our geospatial data and analytics platforms could lead to reputation damage or legal liabilities.

AI is integrated into some of our geospatial data and analytics platforms and is an expanding component of our business offerings. Similar to any nascent technology, AI poses risks and challenges that could impact its further development, use, and thus, our business. AI algorithms might prove flawed, data sets inadequate or biased, and improper data practices by data scientists or end-users could hinder the acceptance of AI solutions. Inaccurate or deficient analyses produced by AI applications could lead to competitive harm, legal liability, and reputational or brand damage. Certain AI scenarios also entail ethical considerations. Our enabling or provision of AI solutions that provoke controversy regarding their impact on financial conditions, operations, or societal matters might result in competitive harm, legal repercussions, or brand and reputational damage. The adoption of AI in our platforms also introduces technical complexity and specialized expertise requirements. Any disruptions or failures in our AI systems or infrastructure could delay operations and harm our financial performance.”

LEMONADE: “Malfunctions or deviations from expectations in our proprietary artificial intelligence algorithms could lead to underwriting inappropriate policies, incorrect pricing, or overpayment of customer claims. Additionally, these algorithms may inadvertently generate bias and discrimination.

We rely on data collected from insurance applications to determine which policies to write and how to price them. Our proprietary AI algorithms also process numerous claims. The data acquired through customer interactions is evaluated and managed by these algorithms. The ongoing development, maintenance, and operation of our complex backend data analytics engine is resource-intensive and complicated, potentially leading to significant performance issues or undetected defects, especially with new AI-powered features. We could encounter technical hurdles, and unforeseen problems may hinder the proper operation of our algorithms. Any malfunction could lead to incorrect pricing, or to claims being wrongly approved or denied, causing customer dissatisfaction, policy cancellations, and potential underpricing or overpayment. Moreover, our proprietary AI algorithms might produce unintentional bias and discrimination during underwriting, potentially leading to legal or regulatory liability. Legislators and regulators are increasingly focused on the use of AI in underwriting. For instance, the California and Connecticut Departments of Insurance have issued bulletins on the use of AI and big data. We cannot anticipate the limitations authorities may impose on AI usage. These issues could seriously harm our business, operations, and financial health.”

YEXT: “We are incorporating generative artificial intelligence (AI) into some products, which poses compliance and reputation risks due to the novel and evolving nature of this technology.

We have incorporated various generative AI features into our products. This emergent technology, in its early stages of commercial use, comes with inherent risks. AI relies on machine learning and predictive analytics, which may introduce unintended biases and discriminatory outcomes. Despite measures to address algorithmic bias, such as testing and data source reviews, the potential for AI algorithms to generate inaccurate or objectionable results remains. In addition, technical complexity and specialized expertise are prerequisites for AI deployment. Any disruptions or failures in our AI systems could lead to operational errors. Third-party generative AI algorithms might produce misleading or inappropriate content, harming our reputation, business, and customers. The use of AI also raises ethical concerns. Controversial AI usage affecting financial conditions, operations, or societal matters could lead to reputational harm. Anticipating future AI regulations is challenging. Governments might limit or regulate AI, potentially affecting our product efficiency and usability for extended periods.”

ZIPRECRUITER: “Issues arising from artificial intelligence (including machine learning) within our marketplace may result in reputational damage, legal liabilities, adverse effects on our business, or new regulations that restrict AI use.

AI is integrated into our marketplace, playing a significant role. Like all developing technologies, AI presents potential risks that could influence its development, use, and our business. AI algorithms could be flawed, data sets may be biased, or data practices controversial. Missteps in AI acceptance and inadequate analyses could lead to competitive harm, legal liabilities, and brand damage. Ethical concerns also surround certain AI scenarios. Introducing or offering AI solutions that provoke controversy due to societal impacts could tarnish our reputation. Moreover, we anticipate the advent of new laws and regulations for AI. Governments may restrict AI use, which might hinder or impair our product efficacy and efficiency.”

EVENTBRITE: “We are integrating generative artificial intelligence (AI) into some products, which comes with operational and reputation risks due to its evolving nature.

We have incorporated third-party generative AI elements into our products. As this technology is still in its early commercial stage, it carries inherent risks. AI relies on machine learning and predictive analytics, which could result in accuracy issues, unintended biases, and discriminatory outcomes. We have introduced measures, such as in-product disclosures, but third-party AI algorithms may still produce inaccurate or misleading content. Any generated content or behavior, including hallucinations, might damage our reputation, business, and customers. Moreover, AI deployment involves technical complexity and specialized skills, and any disruptions or failures in our AI systems could disrupt operations. Because our AI usage spans various applications, the potential for controversial use exists; if our application of AI becomes controversial due to its financial or operational impacts, we could face brand harm. The rapid evolution of AI requires substantial resources to implement it ethically and minimize unintended consequences.”

BIOGEN: “The increased use of social media and artificial intelligence-based software introduces new risks and challenges.

Social media is increasingly used to discuss our products and the diseases they treat. Regulation of such use is uncertain, creating compliance risks and potential penalties for noncompliance. Patients’ use of social media to comment on product efficacy or report alleged adverse events could lead to failures to monitor and comply with reporting obligations, or to an inability to address market pressures and political influences, affecting the public perception of our products. The use of AI-based software is also on the rise. It may inadvertently lead to the release of confidential information, diminishing the value of our intellectual property.”

FLEXSHOPPER: “If we cannot enhance our artificial intelligence (AI) models or if our models contain errors, our growth potential, business, financial standing, and results of operations could suffer.

Effective credit evaluation is crucial for attracting customers and facilitating loans on our platform. Automation is also critical for efficient operations. If AI models don’t accurately predict borrower creditworthiness, this could lead to higher losses. Flawed models could also lead to incorrect pricing or loan approval decisions. The automation also extends to other facets of lending, such as fraud detection. If AI models underperform or prove inaccurate, sub-optimal decisions and operational or strategic errors could occur, leading to business, financial, and operational harm.”

AEYE: “Our reliance on deterministic artificial intelligence-driven sensing systems for ADAS technology poses a risk to our business if not selected by automotive OEMs or their suppliers.

Automotive OEMs and their suppliers undergo extended testing before selecting products like our active lidar systems. Our products must meet specifications beyond our control. Failure to achieve design wins, or a lack of success in a given vehicle model, could harm our business, and non-selection for one model might affect selection for other models. Additionally, uncertainty around the market adoption of lidar could harm our business, particularly if the market does not develop as expected or accepts the technology more slowly than anticipated. Our diversification efforts into aerospace, defense, and other sectors carry distinct risks, as these markets have specific requirements.

“Reliance on certain artificial intelligence and machine learning models introduces risks associated with design flaws, inadequate data, or lack of rights, potentially harming performance, reputation, and incurring legal liabilities.”
