What Startups Should Know About Early-Stage Equity Term Sheets

If your company is preparing to raise its first round of institutional capital, congratulations. That’s a major milestone. At this stage, it’s critical to ensure that both you and your prospective investors are aligned on the structure and terms of the investment. That’s where a term sheet comes in.

In this comprehensive guide, we’ll explore:

  • Why the name of the financing round (for example, “seed” vs. “Series A”) matters
  • Key differences between Seed and Series A financing
  • What is typically included in an early-stage equity financing term sheet
  • What founders should look out for when issuing preferred stock

Why the Name of the Round Matters

The name of your financing round is more than a formality: it sets expectations.

  • Seed Round: suggests your company is in the early stages of development, likely pre-revenue or just beginning to gain traction.
  • Series A Round: implies more maturity, such as a validated product, growing user base, and early revenue, indicating the company is ready to scale.

Investors often rely on these labels to assess your company’s progress, risk profile, and potential return. Choosing the right naming convention can help manage expectations and align your fundraising strategy with industry norms.

Seed vs. Series A Financing: Key Differences

A seed financing round typically follows friends-and-family or angel investments and represents the first institutional capital. Investors in these rounds may include angel investors, seed-stage venture firms, or angel groups. Key characteristics of a seed round include:

  • Smaller investment amounts (typically $500,000 to $2 million)
  • Simpler deal terms
  • Fewer governance rights and investor protections

Even if a seed round’s structure resembles a Series A, especially if you’re issuing equity rather than SAFEs or convertible notes, founders often prefer to use the “seed” label. Why? It allows the company to reserve the “Series A” designation for a future round, ideally at a higher valuation once the business has matured.

Series A Financing Overview

Series A financing generally follows a successful seed round and is triggered once a company has achieved meaningful traction, such as strong user growth, early revenue, or product-market fit. Series A rounds are typically:

  • Larger in size ($2 million to $15 million or more)
  • Led by institutional venture capital firms
  • Structured with more complex terms, such as:
    • Board representation
    • Liquidation preferences
    • Anti-dilution protections

At this stage, the company is transitioning from startup to scale-up.

What is Included in an Early-Stage Term Sheet?

While the specifics vary depending on investors and company leverage, most early-stage preferred stock term sheets include the following key provisions:

Price Per Share / Pre-Money Valuation

The term sheet will often include either a price per share or a pre-money valuation from which the price per share is calculated. While a higher valuation might seem ideal, it can also mean higher expectations, greater pressure to grow rapidly, and more difficult future funding rounds.

A higher valuation only benefits founders if it’s realistic and paired with founder-friendly terms. It’s essential to evaluate the entire term sheet, not just the headline valuation.
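To make the arithmetic concrete, here is a minimal sketch (in Python, with hypothetical figures) of how a price per share and the investor’s resulting ownership are typically derived from a pre-money valuation:

```python
# Illustrative only: derive price per share and investor ownership
# from a pre-money valuation. All figures are hypothetical.

def price_per_share(pre_money: float, fully_diluted_shares: int) -> float:
    """Price per share = pre-money valuation / fully diluted shares outstanding."""
    return pre_money / fully_diluted_shares

def post_money_ownership(investment: float, pre_money: float) -> float:
    """Investor ownership after the round, on a post-money basis."""
    return investment / (pre_money + investment)

# Hypothetical: $8M pre-money, 10M fully diluted shares, $2M investment.
pps = price_per_share(pre_money=8_000_000, fully_diluted_shares=10_000_000)
ownership = post_money_ownership(investment=2_000_000, pre_money=8_000_000)

print(f"Price per share: ${pps:.2f}")          # $0.80
print(f"Investor ownership: {ownership:.0%}")  # 20%
```

Note that the fully diluted share count (which usually includes the option pool) materially affects the price per share, which is one reason option-pool sizing is often negotiated alongside the valuation.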

For more on this topic, check out our related advisory: Understanding Pre-Money vs. Post-Money Valuation.

Maximum Offering Amount

This is the total amount of capital the company aims to raise in the round. Along with the valuation, it helps investors calculate the ownership percentage they’ll receive if the round is fully subscribed. The number should balance capital needs, dilution tolerance, and investor interest.

Offering Period

The offering period defines the window in which shares may be sold, ensuring the financing doesn’t remain open indefinitely. This is important because the company’s valuation may change significantly over time.

Liquidation Preferences

In the event of a liquidity event such as a sale, IPO, or liquidation, preferred shareholders typically have two options:

  1. Receive their original investment before common shareholders receive any proceeds; or
  2. Convert their preferred shares to common stock and participate in the distribution on an as-converted basis.

In some cases, investors receive both: their initial investment plus a share in the remaining proceeds (“participating preferred” stock). While this was more common in earlier markets, non-participating preferred has become more standard in later-stage deals or more competitive fundraising environments.
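The economic difference between the two structures can be illustrated with a simplified sketch (Python, hypothetical figures; real waterfalls also involve preference multiples, participation caps, and multiple series of preferred):

```python
# Illustrative liquidation waterfall for a single series of preferred stock.
# Hypothetical figures; real deals involve multiples, caps, and stacked series.

def non_participating_payout(invested, ownership_pct, exit_proceeds):
    """Investor takes the greater of (1) the liquidation preference or
    (2) converting to common and taking a pro-rata share of all proceeds."""
    preference = invested
    as_converted = ownership_pct * exit_proceeds
    return max(preference, as_converted)

def participating_payout(invested, ownership_pct, exit_proceeds):
    """Investor takes the preference first, then shares pro rata in the rest."""
    preference = min(invested, exit_proceeds)
    remainder = exit_proceeds - preference
    return preference + ownership_pct * remainder

# Hypothetical: $2M invested for 20% ownership; company sells for $10M.
print(non_participating_payout(2e6, 0.20, 10e6))  # 2,000,000 (max of $2M vs $2M)
print(participating_payout(2e6, 0.20, 10e6))      # 3,600,000 ($2M + 20% of $8M)
```

As the example shows, participating preferred can meaningfully reduce what common shareholders receive at modest exit values, which is why founders tend to resist it.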

Anti-Dilution Provisions

These provisions protect investors from dilution if the company raises capital in a down round, that is, at a lower valuation than the previous round. Two common structures are:

  • Weighted Average: More founder-friendly; adjusts the conversion price proportionally.
  • Full Ratchet: More investor-protective; resets the conversion price to match the new round.

The structure used often reflects the company’s stage and the negotiating power of each party.
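The arithmetic behind the two approaches can be sketched as follows (Python, using the common broad-based weighted average formula and hypothetical share counts):

```python
# Illustrative anti-dilution adjustments in a down round.
# CP = conversion price of the existing preferred. Hypothetical figures.

def weighted_average_cp(old_cp, shares_outstanding, new_money, new_price):
    """Broad-based weighted average: CP2 = CP1 * (A + B) / (A + C), where
    A = fully diluted shares before the new issuance,
    B = shares the new money would buy at the old price (new_money / CP1),
    C = shares actually issued in the down round (new_money / new_price)."""
    a = shares_outstanding
    b = new_money / old_cp
    c = new_money / new_price
    return old_cp * (a + b) / (a + c)

def full_ratchet_cp(old_cp, new_price):
    """Full ratchet: the conversion price simply resets to the new, lower price."""
    return new_price

# Hypothetical: old conversion price $1.00; 10M shares out; $2M raised at $0.50.
print(round(weighted_average_cp(1.00, 10_000_000, 2_000_000, 0.50), 4))  # 0.8571
print(full_ratchet_cp(1.00, 0.50))                                       # 0.5
```

In this hypothetical, the weighted average adjustment lowers the conversion price only modestly (to about $0.86), while the full ratchet drops it all the way to $0.50, roughly doubling the preferred holders’ as-converted share count at founders’ expense.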

Board Representation

Investors often negotiate for the right to appoint one or more members to the company’s board of directors. It’s common for both preferred and common shareholders to appoint directors, with remaining board seats filled by mutual agreement.

Voting Rights

Term sheets typically specify how preferred shares vote, often alongside common shares on a one-vote-per-share basis. In certain cases, preferred holders may vote separately on key issues.

Some term sheets also include protective provisions that require majority approval from preferred shareholders or their board representatives. These provisions give preferred shareholders a type of veto power over certain company actions, such as issuing new equity, selling the company, or amending the certificate of incorporation.

Participation Rights

Preferred investors almost always receive a right of first refusal to purchase their pro-rata share in future fundraising rounds. This helps protect their ownership from dilution.
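The pro-rata arithmetic is simple; a brief sketch (Python, hypothetical figures):

```python
# Illustrative pro-rata participation: how much an existing investor may
# invest in a new round to maintain its ownership. Hypothetical figures.

def pro_rata_investment(ownership_pct: float, round_size: float) -> float:
    """An investor holding ownership_pct may purchase that same fraction
    of the new round to avoid being diluted by it."""
    return ownership_pct * round_size

# A 20% holder's pro-rata share of a hypothetical $5M Series A:
print(pro_rata_investment(0.20, 5_000_000))  # 1,000,000.0
```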

Other Common Term Sheet Provisions

Additional rights commonly found in term sheets include:

  • Drag-along and Tag-along Rights – Govern how shareholders participate in company sales
  • Registration Rights – Important in the context of a future IPO
  • Dividend Rights – Determine how and when dividends are paid to preferred holders
  • Conversion Rights – Allow preferred shares to convert to common stock at a preset ratio

A full explanation of these terms is beyond the scope of this guide, but founders should be aware of their presence and potential impact.

Term sheets often include a provision requiring the company to reimburse the lead investor’s legal fees, usually subject to a cap (e.g., $25,000 to $50,000). This is standard practice and should be factored into your fundraising budget.

Final Thoughts for Founders

Term sheets for preferred stock equity financings can vary significantly based on the company’s stage, investor preferences, and market conditions. However, the concepts outlined in this guide represent the core elements that startups are likely to encounter when negotiating early-stage investment deals.

If you have questions about any of these provisions, or if you’re preparing to raise capital and need support negotiating a term sheet, reach out to a member of Varnum’s Venture Capital and Emerging Companies Practice Team. We’re well-equipped to provide strategic, practical advice tailored to your company’s goals.

First-of-Its-Kind: Teen Privacy Law Passes in Arkansas

Arkansas Expands Online Privacy Laws to Teens

On April 22, 2025, Arkansas enacted the Arkansas Children and Teens’ Online Privacy Protection Act (HB 1717, Act 952), making it the first state to expand core federal children’s privacy protections to teens. The law, effective July 1, 2026, applies to for-profit websites, online services, apps, and mobile applications that are directed to children (under 13) or teens (ages 13-16), or that have actual knowledge they are collecting personal information from these groups.

The Act establishes a two-tiered framework: parental consent is required to collect personal information from children, while either the teen or their parent may consent in the case of users aged 13 to 16. Operators must also provide clear notice of their data practices, respect deletion and correction requests, and implement reasonable security measures. The statute broadly defines personal information to include not only contact details and identifiers, but also biometric data, geolocation, and any information linked or reasonably linkable to a child, teen, or parent.

The law prohibits targeted advertising to minors using their personal information and limits data collection to what is necessary for the specific service or transaction. Operators are not required to implement age verification, but are expected to comply where they have actual knowledge of a user’s age. Importantly, enforcement authority is vested exclusively in the Arkansas Attorney General; the law does not create a private right of action.

HB 1717 reflects growing state-level momentum to address youth privacy concerns amid the absence of federal privacy reform. Businesses that operate online platforms accessible to Arkansas users, particularly those relying on personalized advertising or handling sensitive data, should evaluate their compliance posture now to prepare for the law’s 2026 effective date.

Varnum’s Data Privacy Practice Team is available to help your organization assess its obligations under Arkansas’ new law, align with regulatory requirements, and develop a compliant data strategy.

Connelly v. U.S.: A Reminder About Corporate Owned Life Insurance

Supreme Court Ruling Impacts Corporate-Owned Life Insurance

Many businesses have used corporate owned life insurance (COLI) and buy-sell agreements as key elements of their succession planning. However, it may be time to consider whether these programs are creating unnecessary risk. Although these programs generally have not been problematic in the past, a recent Supreme Court case has potentially changed the analysis.

COLI is a life insurance policy owned by the company on the life of an employee, with some or all of the benefits payable to the company. This life insurance can provide a significant cash benefit at a time when the company may be looking to fund the repurchase of shares from a deceased owner. Historically, practitioners have excluded insurance proceeds from a business’s valuation when those proceeds are contractually designated for repurchasing shares under a buy-sell agreement. This exclusion arises because the buy-sell agreement creates a liability that offsets some or all of the proceeds. Although excluding the value of the insurance proceeds from the value of the business was relatively common, the IRS had sometimes argued that the value of the insurance should be included in the value of the company.

In Connelly v. U.S., the Supreme Court unanimously held that COLI proceeds used to fund a share repurchase must be included in the valuation of the company that receives them. When the value of the COLI is added for tax and valuation purposes, there are several possible implications. First, the increased value from including the COLI may have to be reflected in a higher purchase obligation under the buy-sell agreement associated with the COLI than would otherwise be necessary. Second, the value of the COLI may need to be included in company valuations related to deferred and executive compensation payments. Third, the inclusion of COLI proceeds as an asset on the company’s balance sheet may impact the company’s investment or lending agreements. And fourth, the increased value of the company needs to be reflected when valuing the decedent’s company equity for estate tax purposes and in any tax planning for surviving owners.

After Connelly v. U.S., companies and business owners should reassess how COLI and buy-sell agreements interact. If a COLI and a buy-sell agreement are already in place, now is a good time to review them to determine if changes need to be made. If so, make those changes before it’s too late. For companies that do not have COLI and a buy-sell agreement in place, it is a good time to determine if your business should have these arrangements in place now that the Supreme Court has settled the question.

If you have questions about COLI, buy-sell agreements, and their implications for your business, contact a member of Varnum’s Employee Benefits, Corporate, or Estate Planning Practice Teams.

EGLE Expected to Lower PFAS Maximum Contaminant Levels in or before 2026

EGLE Plans Lower PFAS Limits Ahead of 2027 EPA Enforcement Deadline

Regulations of PFAS (per- and polyfluoroalkyl substances) have been evolving quickly, and more changes are on the way in Michigan. 

In 2020, Michigan established some of the nation’s first drinking water standards for PFAS, setting limits with Maximum Contaminant Levels (MCLs).[1] For example, the MCL for PFOA (perfluorooctanoic acid) is 8 parts per trillion (ppt), and the MCL for PFOS (perfluorooctanesulfonic acid) is 16 ppt.

However, federal regulations will require EGLE (the Michigan Department of Environment, Great Lakes, and Energy) to lower Michigan’s MCLs even further. In April 2024, the U.S. Environmental Protection Agency (EPA) finalized national drinking water MCLs under the Safe Drinking Water Act, establishing thresholds as low as 4 ppt for five kinds of PFAS (including PFOA and PFOS), as well as regulating four PFAS collectively when present as a mixture.[2] These national MCLs will take effect in 2027.[3] In 2029, the national MCLs become enforceable, triggering penalties and increased monitoring frequencies.

In Michigan, the EPA has delegated its authority to the state to enforce the Safe Drinking Water Act.[4] This means that EGLE is the primary agency in charge of creating and enforcing drinking water regulations. Because this authority is delegated, state limits on contaminants in drinking water must be at least as restrictive as federal limits.[5] Thus, Michigan will need to enact new MCLs on or before 2027.

In recent conversations, EGLE has stated it intends to update the state’s MCLs on or before 2026, at levels at or below the EPA’s national MCLs for each type of PFAS. However, EGLE will not start the process until the Michigan Supreme Court decides a case filed by 3M against the state, which challenges the state’s current MCLs. According to the state, the outcome of the case could dictate the process that EGLE needs to use for implementing the new MCLs and affect the timeline for the new rules. Either way, new rules are forthcoming.

Any municipal or private water system subject to the Safe Drinking Water Act should be prepared to meet the forthcoming (lower) MCLs. Varnum’s Environmental and Natural Resources Practice Team continues to monitor developments. Contact Kyle Konwinski, C.J. Biggs, or your Varnum environmental attorney today to learn more about the requirements of these upcoming PFAS regulations and how they may impact your business or organization.

FTC Signals Major Shift: Children’s Privacy a Top Enforcement Priority

FTC Prioritizes Children's Data Privacy

The Federal Trade Commission (FTC) has made clear that protecting children’s privacy is now a top enforcement priority under its new leadership. Recent statements from FTC Commissioner Melissa Holyoak reinforce that businesses handling children’s data should prepare for heightened regulatory scrutiny.

Speaking in Washington, D.C., during the International Association of Privacy Professionals (IAPP) Global Privacy Summit, Commissioner Holyoak emphasized that children’s privacy remains a significant enforcement priority for the FTC. Central to this renewed focus is the enforcement of the Children’s Online Privacy Protection Act (COPPA), with the agency already advancing significant updates to the rule.

The FTC is not the only regulatory body focusing on children’s privacy. Also speaking at the IAPP Global Privacy Summit, representatives of state regulatory bodies charged with enforcing state data protection laws similarly echoed the desire to focus on higher-risk activities. That focus includes paying closer attention to the processing of teens’ personal information online, as well as practices related to the sale of minors’ data. For companies that have historically relied on COPPA’s age threshold to bifurcate data collection practices between children and others, this may pose a challenge: data related to teens is not within the scope of the original COPPA threshold but would be the focus of regulatory enforcement at the state level. Given the heightened focus regulators across the board are placing on this scenario, companies dealing with teens’ data should consider, as an initial step, mapping out the specific use cases where this may arise and analyzing the potential risk of each individually.

Businesses that collect, use, or share children’s data should act now to evaluate and strengthen their privacy practices to mitigate regulatory risk. Varnum’s Data Privacy and Cybersecurity Practice Team stands ready to help your organization navigate these developments, strengthen your compliance strategies, and stay ahead of regulatory risk.

The Changing Landscape of Dispute Resolution for Website Operators

Historically, many website operators have included a provision in their published website terms and conditions or terms of use (Terms) governing how a website visitor is able to resolve disputes with the website operator. Recently, due to a proliferation of privacy-related litigation relating to the use of tracking technologies embedded on websites, website operators are reconsidering the most effective ways to address the dispute resolution mechanism included in the Terms.

Why Has Arbitration Historically Been the Industry Standard?

Arbitration has long been the preferred method for website operators to handle disputes with website visitors because it offers some key advantages over other mechanisms for dispute resolution, such as mediation and litigation:

  • Efficiency: Arbitration proceedings typically proceed to a resolution faster than court litigation does, which reduces the time and resources spent on resolving disputes.

  • Lower Costs: Arbitration is generally less expensive than litigation because it requires lower legal fees and preparation costs compared to traditional trials, and decisions are generally not appealable.

  • Confidentiality: The private nature of arbitration helps protect companies’ reputations.

  • Control: Companies can select the arbitrator and establish their own rules for the proceedings, which provides a tailored approach to dispute resolution.

What is Causing This Change?

These benefits have made mandatory arbitration provisions a staple in website Terms. However, the benefits of maintaining mandatory arbitration provisions appear to be waning. This shift can be largely attributed to the rise in tracking-technology litigation, as website operators face privacy-related challenges from plaintiffs under federal laws such as the Electronic Communications Privacy Act (ECPA) and the Video Privacy Protection Act (VPPA), as well as state laws like California’s Invasion of Privacy Act (CIPA). While the use of tracking technologies, such as cookies and pixel tags, is a common feature of websites, increased complaints about these tools pose a significant challenge for website operators that rely on traditional arbitration provisions.

Mass arbitrations, in which hundreds or even thousands of individual claims are filed on behalf of website visitors, have created administrative burdens and high costs for website operators, as website operators generally pay most or all of the arbitration fees. In response to this strategy, website operators are recognizing the need to adapt to the complexity and volume of data-privacy actions by reevaluating their default dispute-resolution strategy. This reevaluation includes revising arbitration clauses to include mechanisms like bellwether processes, where a small number of cases are resolved first in order to guide the resolution of subsequent cases. Notably, at least one court has refused to compel arbitration of a class action against a website operator on the grounds that its arbitration agreement, including the mass-arbitration bellwether provision, was unenforceable because it was “permeated by provisions which are unconscionable and violative of New Jersey public policy.”[1] Another option is to require informal dispute resolution before arbitration, such as mediation. Additionally, some website operators are considering opting out of mandatory arbitration altogether, or at least carving out data-privacy disputes from the general arbitration provision, to allow disputes to be handled by the courts; this might result in more litigated class actions than in the past.

What Options Do Website Operators Have?

There are several strategic options to address these challenges. For example, website operators may consider revising arbitration clauses, opting out of arbitration altogether, or utilizing alternative dispute-resolution mechanisms. Each of these approaches has distinct pros and cons:

  • Revise arbitration clauses
    • Pros: Streamlines mass claims, potentially reducing administrative burdens and costs through bellwether processes. Alternatively, if privacy-related claims are carved out, the website operator maintains the benefits of the arbitration provision while mitigating the risk of mass arbitrations stemming from privacy-related complaints.
    • Cons: Remains costly. The revised clauses may be challenged and struck down in court, limiting their effectiveness.
  • Opt out of mandatory arbitration
    • Pros: Allows disputes to be handled by the courts. The burden of dealing with numerous individual claims can be ameliorated through class actions.
    • Cons: Exposes companies to the possibility of larger payouts and increased public/media scrutiny. Heightened legal risks.
  • Utilize alternative dispute-resolution mechanisms (e.g., mediation[2] or an ombudsman program[3])
    • Pros: Flexible, less adversarial, and more suitable for resolving certain types of less complex disputes.
    • Cons: May not be suitable for complex cases. Outcomes are somewhat less predictable, in line with their increased informality.

Each option requires careful consideration based on the website operator’s industry, size, customer base, and the legal environment in which it operates.  For example, revising an arbitration clause might involve carving out privacy-related claims from the provision’s scope. Accordingly, it is important for website operators to discuss tailoring strategies with an experienced attorney.

The approach that any individual company might take to tailoring its website Terms is highly dependent on a variety of factors, including its specific industry, operational scale, customer demographics, and the regulatory environment it is subject to and operates within. Varnum’s experienced data privacy team can help your business navigate this changing landscape and assess the risks and benefits of possible approaches. Varnum is well-equipped to assist in that decision-making process, offering comprehensive guidance so you can make informed decisions about dispute resolution strategies, ensuring alignment with both legal requirements and business objectives.

[1] See Achey v. Cellco P’ship, 475 N.J. Super. 446, 450; 293 A.3d 551, 553–554 (App. Div. 2023).

[2] The Program on Negotiation at Harvard Law School’s website explains that the goal of mediation “is for a neutral third party to help disputants come to a consensus on their own. Rather than imposing a solution, a professional mediator works with the conflicting sides to explore the interests underlying their positions.”

[3] An organizational ombuds “operates in a manner to preserve the confidentiality of those seeking services, maintains a neutral/impartial position with respect to the concerns raised, works at an informal level of the organizational system (compared to more formal channels that are available), and is independent of formal organizational structures.”

Is Insurtech a High-Risk Application of AI? 

While there are many AI regulations that may apply to a company operating in the Insurtech space, these laws are not uniform in their obligations. Many of these regulations concentrate on different regulatory constructs, and the company’s focus will drive which obligations apply to it. For example, certain jurisdictions, such as Colorado and the European Union, have enacted AI laws that specifically address “high-risk AI systems” that place heightened burdens on companies deploying AI models that would fit into this categorization.

What is a “High-Risk AI System”?

Although many deployments that are considered a “high-risk AI system” in one jurisdiction may also meet that categorization in another jurisdiction, each regulation technically defines the term quite differently.

Europe’s Artificial Intelligence Act (EU AI Act) takes a graduated, risk-based approach to compliance obligations for in-scope companies. In other words, the higher the risk associated with an AI deployment, the more stringent the requirements for the company’s AI use. Under Article 6 of the EU AI Act, an AI system is considered “high risk” if it meets both conditions of subsection (1)[1] of the provision or if it falls within the list of AI systems considered high risk and included as Annex III of the EU AI Act,[2] which includes AI systems that deal with biometric data, evaluate the eligibility of natural persons for benefits and services, evaluate creditworthiness, or perform risk assessment and pricing in relation to life or health insurance.

The Colorado Artificial Intelligence Act (CAIA), which takes effect on February 1, 2026, adopts a risk-based approach to AI regulation. The CAIA focuses on the deployment of “high-risk” AI systems that could potentially create “algorithmic discrimination.” Under the CAIA, a “high-risk” AI system is defined as any system that, when deployed, makes, or is a substantial factor in making, a “consequential decision”: a decision that has a material legal or similarly significant effect on, among other things, the provision or cost of insurance.

Notably, even proposed AI bills that have not been enacted have considered insurance-related activity to come within the proposed regulatory scope. For instance, on March 24, 2025, Virginia’s Governor Glenn Youngkin vetoed the state’s proposed High-Risk Artificial Intelligence Developer and Deployer Act (also known as the Virginia AI Bill), which would have applied to developers and deployers of “high-risk” AI systems doing business in Virginia. Compared to the CAIA, the Virginia AI Bill defined “high-risk AI” more narrowly, focusing only on systems that operate without meaningful human oversight and serve as the principal basis for consequential decisions. However, even under that failed bill, an AI system would have been considered “high-risk” if it was intended to autonomously make, or be a substantial factor in making, a “consequential decision,” namely a decision with a material legal, or similarly significant, effect on the provision or denial to any consumer of, among other things, insurance.

Is Insurtech Considered High Risk?

Both the CAIA and the failed Virginia AI Bill explicitly identify that an AI system making a consequential decision regarding insurance is considered “high-risk,” which certainly creates the impression that there is a trend toward regulating AI use in the Insurtech space as high-risk. However, the inclusion of insurance on the “consequential decision” list of these laws does not definitively mean that all Insurtech leveraging AI will necessarily be considered high-risk under these or future laws. For instance, under the CAIA, an AI system is only high-risk if, when deployed, it “makes or is a substantial factor in making” a consequential decision. Under the failed Virginia AI Bill, the AI system had to be “specifically intended to autonomously make, or be a substantial factor in making, a consequential decision.”

Thus, the scope of regulated AI use, which varies from one applicable law to another, must be considered together with the business’s proposed application to get a better sense of the appropriate AI governance in a given case. While there are various use cases that leverage AI in insurance, which could result in consequential decisions that impact an insured, such as those that improve underwriting, fraud detection, and pricing, there are also other internal uses of AI that may not be considered high risk under a given threshold. For example, leveraging AI to assess a strategic approach to marketing insurance or to make the new client onboarding or claims processes more efficient likely doesn’t trigger the consequential decision threshold required to be considered high-risk under CAIA or the failed Virginia AI Bill. Further, even if the AI system is involved in a consequential decision, this alone may not deem it to be high risk, as, for instance, the CAIA requires that the AI system make the consequential decision or be a substantial factor in that consequential decision.

Although the EU AI Act does not expressly label Insurtech as being high-risk, a similar analysis is possible because Annex III of the EU AI Act lists certain AI uses that may be implicated by an AI system deployed in the Insurtech space. For example, an AI system leveraging a model to assess creditworthiness in developing a pricing model in the EU likely triggers the law’s high-risk threshold. Similarly, AI modeling used to assess whether an applicant is eligible for coverage may also trigger a higher risk threshold. Under Article 6(2) of the EU AI Act, even if an AI system fits the categorization promulgated under Annex III, the deployer of the AI system should perform the necessary analysis to assess whether the AI system poses a significant risk of harm to individuals’ health, safety, or fundamental rights, including by materially influencing decision-making. Notably, even if an AI system falls into one of the categories in Annex III, if the deployer determines through documented analysis that the deployment of the AI system does not pose a significant risk of harm, the AI system will not be considered high-risk.

What To Do If You Are Developing or Deploying a “High-Risk AI System”?

Under the CAIA, various obligations come into play when dealing with a high-risk AI system. These obligations vary for developers[3] and deployers[4] of the AI system. Developers are required to display a disclosure on their website identifying any high-risk AI systems they have developed and explain how they manage known or reasonably foreseeable risks of algorithmic discrimination. Developers must also notify the Colorado AG and all known deployers of the AI system within 90 days of discovering that the AI system has caused or is reasonably likely to cause algorithmic discrimination. Developers must also make significant additional documentation about the high-risk AI system available to deployers.

Under the CAIA, deployers have different obligations when leveraging a high-risk AI system. First, they must notify consumers when the high-risk AI system will be making, or will be a substantial factor in making, a consequential decision about the consumer. This notice includes (i) a description of the high-risk AI system and its purpose, (ii) the nature of the consequential decision, (iii) contact information for the deployer, (iv) instructions on how to access the required website disclosures, and (v) information regarding the consumer’s right to opt out of the processing of the consumer’s personal data for profiling. Additionally, when use of the high-risk AI system results in a decision adverse to the consumer, the deployer must disclose to the consumer (i) the reason for the consequential decision, (ii) the degree to which the AI system was involved in the adverse decision, and (iii) the type of data that was used to determine that decision and where that data was obtained from, giving the consumer the opportunity to correct data used about them as well as to appeal the adverse decision via human review. Deployers must also make additional disclosures regarding information and risks associated with the AI system. Given that the failed Virginia AI Bill proposed similar obligations, it is reasonable to view the CAIA as a roadmap for high-risk AI governance considerations in the United States.

Under Article 8 of the EU AI Act, high-risk AI systems must meet several requirements that tend to be more systemic. These include the implementation, documentation, and maintenance of a risk management system that identifies and analyzes reasonably foreseeable risks the system may pose to health, safety, or fundamental rights, as well as the adoption of appropriate and targeted risk management measures designed to address these identified risks. High-risk AI governance under this law must also include:

  • Validating and testing the data sets used to develop AI models within a high-risk AI system to ensure they are sufficiently representative, free of errors, and complete in view of the intended purpose of the AI system;
  • Drawing up technical documentation demonstrating that the high-risk AI system complies with the requirements set out in the EU AI Act before the system goes to market, and maintaining that documentation regularly;
  • Designing the AI system to allow for the automatic recording of events (logs) over the lifetime of the system;
  • Designing and developing the AI system with sufficient transparency that deployers can properly interpret its output, and accompanying it with instructions describing the intended purpose of the AI system and the level of accuracy against which it has been tested;
  • Developing the high-risk AI system in a manner that allows it to be effectively overseen by natural persons while in use; and
  • Ensuring the high-risk AI system achieves appropriate levels of accuracy, robustness, and cybersecurity, and performs consistently throughout its lifecycle.

When deploying high-risk AI systems, in-scope companies must allocate the resources necessary not only to assess whether they fall within this categorization, but also to ensure the full set of requirements is adequately considered and implemented prior to deployment of the AI system.

The Insurtech space is growing in parallel with the expanding patchwork of U.S. AI regulations. Prudent growth in the industry requires awareness of the associated legal dynamics, including emerging regulatory concepts nationwide. Varnum’s Data Privacy and Cybersecurity Practice Team continues to monitor these developments and assess their impact on the Insurtech industry to help your business stay one step ahead.  

[1] Subsection (1) states that an AI system is high-risk if it is “intended to be used as a safety component of a product (or is a product) covered by specific EU harmonization legislation listed in Annex I of the AI Act and the same harmonization legislation mandates that the product that incorporates the AI system as a safety component, or the AI system itself as a stand-alone product, undergo a third-party conformity assessment before being placed in the EU market.”

[2] Annex 3 of the EU AI Act can be found at https://artificialintelligenceact.eu/annex/3/

[3] Under the CAIA, a “Developer” is a person doing business in Colorado that develops or intentionally and substantially modifies an AI system.

[4] Under the CAIA, a “Deployer” is a person doing business in Colorado that deploys a high-risk AI system.

ERISA Fiduciary Duties: Compliance Remains Essential

The Employee Retirement Income Security Act of 1974 (ERISA) establishes a comprehensive framework of fiduciary duties for many involved with employee benefit plans. Failure to comply with these strict fiduciary standards can expose fiduciaries to personal and professional liability and penalties. With ERISA litigation on the rise, a new administration, and recent news that the Department of Labor (DOL) is sharing data with ERISA-plaintiff firms, a refresher on fiduciary duty compliance is necessary.

What Plans Are Covered?

ERISA’s fiduciary requirements apply to all ERISA-covered employee benefit plans. This generally includes all employer-sponsored group benefit plans unless an exemption applies; exempt plans include governmental and church plans, as well as plans maintained solely to comply with workers’ compensation, unemployment compensation, or disability insurance laws.

Who Is A Fiduciary?

A fiduciary is any individual or entity that does any of the following:

  • Exercises authority over the management of a plan or the disposition of assets.
  • Provides investment advice regarding plan assets for a fee.
  • Has any discretionary authority in the administration of the plan.

Note that fiduciary status is determined by function (the duties an individual performs or has the right to perform) rather than by an individual’s title or how they are described in a service agreement. Fiduciaries include named fiduciaries (those specified in the plan documents), plan trustees, plan administrators, investment committee members, investment managers, and other persons or entities that fall under the functional definition. When determining whether a third-party administrator is a fiduciary, it is important to identify whether its administrative functions are solely ministerial or directed, or whether the administrator has discretionary authority.

What Rules Must Fiduciaries Follow?

Fiduciaries must understand and follow the four main fiduciary duties:

  • Duty of Loyalty: Known as the exclusive benefit rule, fiduciaries are obligated to discharge their duties solely in the interest of plan participants and beneficiaries. Fiduciaries must act to provide benefits to participants and use plan assets only to pay for benefits and reasonable administrative costs.
  • Duty of Prudence: A fiduciary must act with the same care, skill, prudence, and diligence that a prudent fiduciary would use in similar circumstances. Even when considering experts’ advice, hiring an investment manager, or working with a service provider, a fiduciary must exercise prudence in their selection, evaluation, and monitoring of those functions and providers. This duty extends to procedural policies and plan investment and asset allocation, including evaluation of risk and return.
  • Duty of Diversification: Fiduciaries must diversify plan investments to minimize the risk of large losses, with limited exceptions for ESOPs.
  • Duty to Follow Plan Documents and Applicable Law: Fiduciaries must act in accordance with plan documents and ERISA. Plans must be in writing, and a summary plan description of the key plan terms must be provided to participants.

Fiduciaries also have a duty to avoid causing the plan to engage in any prohibited transactions. Prohibited transactions include most transactions between the plan and individuals and entities with a relationship to the plan. Several exceptions exist, including one that permits ongoing provision of reasonable and necessary services.

Liabilities and Penalties

An individual or entity that breaches fiduciary duties and causes a plan to incur losses may be personally liable for undoing the transaction or making the plan whole. Additional penalties, often at a rate of 20% of the amount involved in the violation, may also apply. While criminal penalties are rare, they are possible when violations of ERISA are intentional. Causing the plan to engage in prohibited transactions may also result in excise taxes established by the Internal Revenue Code.

To limit potential liability, plan sponsors and fiduciaries should ensure the appropriate allocation of fiduciary responsibilities, develop adequate plan governance policies, and participate in regular training. Plan sponsors may purchase fiduciary liability insurance to cover liability or losses arising under ERISA. In addition, the DOL has established the Voluntary Fiduciary Correction Program (VFCP), which can provide relief from civil liability and excise taxes if ERISA fiduciaries voluntarily report and correct certain transactions that breach their fiduciary duties. The VFCP was recently updated with expanded provisions for self-correction of errors, which are addressed in a previous advisory.

Understanding and adhering to the responsibilities outlined under ERISA allows fiduciaries to better serve and protect the financial well-being of participants and beneficiaries. If you have any questions regarding your responsibilities under ERISA or need assistance ensuring your plan policies are consistent with ERISA regulations, please contact a member of Varnum’s Employee Benefits Team.