Developers Need Crystal Ball For AI Legislation

Artificial intelligence developers have largely lived under a cloud of uncertainty around how regulatory bodies will influence their work, but for the first time, that cloud has started to clear.

The European Union, an outspoken leader in establishing rules for AI development, recently passed the most significant piece of AI-focused legislation to date. The EU AI Act represents one component of the bloc’s plan to “support the development of trustworthy AI” and clearly define how and where developers can build AI. In a realm of technology that mostly looks like untamed wilderness, the EU has started to build roads that will shape emerging tools and capabilities.

While legislators work to catch up with developers, health tech companies need some ability to predict the future. The pace of legislative action in the U.S. has lagged far behind the breakneck speed of AI innovation, but developers here are watching the EU closely because it could offer a framework for the U.S. to emulate. Those of us building solutions for healthcare face the highest of stakes, saving human lives, alongside tight regulatory oversight. We must weigh the guidelines already in place, and the legislation slowly moving through government, to build solutions that work both now and under forthcoming regulations.

This is a view of the current regulatory landscape for AI and the pending legislation that tech developers need to keep an eye on as it moves through Congress.

Rules In Effect Today

The Office of the National Coordinator for Health Information Technology (ONC) finalized a new rule in December with a key component for AI developers. The Health Data, Technology, and Interoperability: Certification Program Updates, Algorithm Transparency, and Information Sharing (HTI-1) rule requires ONC-certified health IT solutions to build the algorithms behind their AI tools transparently.

Some early AI implementations have shown biases, including racial biases, in predictions that could widen the disparity in health outcomes for minority patients. ONC’s rule aims to mitigate those dangers by giving clinical users “access [to] a consistent, baseline set of information about the algorithms they use” and allowing them to identify any issues.

“Transparency” has been the operative word in discussions about AI regulation, including in the Biden administration’s executive order on AI last fall, which calls for developers to share safety test results with the government. AI’s early successes, such as Google’s solution for diagnosing diabetic retinopathy, and its missteps, including Medicare plans denying necessary care, both offer learning opportunities to improve AI’s development and implementation through data sharing.

Medicare has already taken action to improve its use of algorithms in coverage decisions, issuing new guidelines that call for a balance of human influence in the decision-making process. In short, a Medicare coverage determination cannot rely solely on an algorithmic process; it must account for “the individual patient's medical history, the physician’s recommendations, or clinical notes.” Developers of AI for healthcare should keep in mind that the solutions they build have to interface with clinicians in an assistive capacity, not as the final word on patient care decisions. They also need to standardize how they share their test results with regulators, giving a clear and consistent view into the development process.

Legislation To Watch

Senator Ron Wyden, an Oregon Democrat, introduced the Algorithmic Accountability Act last September. It would create a new bureau within the Federal Trade Commission dedicated to receiving impact reports from AI builders and compiling those reports into a repository of information about AI tools.

The bill aims to shed light on where exactly AI is influencing decisions – in medicine and other fields, such as tenant screening for renting a home. It also aims to provide structure to AI reporting without creating an entirely new agency. By working directly with the FTC, AI developers could engender more trust from patients, who have largely shown a distrust of AI in their medical care.

Generative AI, or large language model-based AI, is also sure to attract the attention of regulators for its proclivity for “hallucinations” and inaccuracies. Some companies, including KeyBank, have already limited or banned internal use of generative AI out of concern for data privacy or anticipating restrictions from legislators.

Recent reports suggest Apple and Google are discussing building Google’s Gemini AI platform into the iPhone. This partnership would put Google firmly in command of how consumers leverage generative AI, much as Google became the de facto search tool for Apple’s broad base of iPhone users. However, such a deal would certainly attract attention from the Department of Justice over anti-competitive behavior, just as the search engine partnership between the two companies did. How the DOJ handles each of those partnerships could indelibly shape the consumer relationship with generative AI.

At this stage, many more questions about AI guidelines exist than answers, but the picture is growing less murky by the day. With the context of some legislation already in place, and an eye toward anticipated rules, developers can still build successful AI-powered tools that positively impact healthcare and stay within the boundaries of federal regulation.
