Technology

AI Executive Order: Companies Must Prove They Can Be Trusted With Technology

On October 30, 2023, President Biden issued a first-of-its-kind executive order on the safe, secure, and trustworthy development and use of artificial intelligence, setting out new provisions and guidelines for how AI is built and deployed.

Like all executive orders, it carries the force of law, with broad repercussions for businesses that fail to comply.

Background

The AI E.O. provides an opportunity to address concerns raised by emerging technologies such as generative AI. The order builds on voluntary commitments the White House previously secured from key players in AI, including OpenAI, Nvidia, and Google: fifteen major tech companies had already pledged to adopt voluntary AI safety safeguards.

Earlier in the year, hundreds of technology leaders signed an open letter calling for a pause on advanced AI development until laws and regulations were in place. One of the most prominent signatories, Elon Musk, has since launched Grok, a chatbot from his AI company xAI, pitched as an alternative to ChatGPT. Even though much remains to be done, Biden’s AI executive order is a step toward safety and equity in how AI is deployed.

Content watermarking

Companies must inform the federal government if they are developing large foundation models that could pose risks to national security. AI companies must also ensure that their models are developed ethically, tested sufficiently, and that AI-generated content is watermarked. A key concern is that the largest AI model companies tend to obscure how “ethically” their models were developed.

Watermarking may help solve many AI-related problems, such as misinformation and deepfakes. Nonetheless, businesses that use AI for content creation may face problems with customer trust if all AI content is watermarked. They may be forced to rehire marketing teams that were laid off due to automation.
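The order does not prescribe a specific watermarking technique; it directs the Department of Commerce to develop guidance for authenticating and labeling AI-generated content. As a rough illustration of one approach researchers have proposed for text, a statistical watermark nudges a model’s sampling toward a pseudo-random “green list” of tokens, and a detector later checks whether a piece of text over-represents that list. The Python sketch below shows only the detection side, uses hypothetical function names, and assumes token IDs are already available from a tokenizer.

import hashlib

def is_green(prev_token: int, token: int, green_ratio: float = 0.5) -> bool:
    # Hypothetical helper: pseudo-randomly assign `token` to the green list,
    # keyed on the preceding token so the partition changes at every position.
    digest = hashlib.sha256(f"{prev_token}:{token}".encode()).digest()
    return int.from_bytes(digest[:8], "big") / 2**64 < green_ratio

def green_fraction(token_ids: list[int], green_ratio: float = 0.5) -> float:
    # Fraction of tokens landing on the green list. Unwatermarked text should
    # hover near `green_ratio`; watermarked text scores noticeably higher.
    pairs = list(zip(token_ids, token_ids[1:]))
    if not pairs:
        return 0.0
    return sum(is_green(p, c, green_ratio) for p, c in pairs) / len(pairs)

# Toy usage with made-up token IDs; a real detector would run on tokenizer output.
print(round(green_fraction([101, 2054, 2003, 1996, 3437, 102]), 2))

On the generation side, the model would be steered to prefer green-list tokens, which is what makes the statistic detectable later without requiring any visible marker in the text itself.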

AI transparency

AI bias occurs when machine learning systems produce prejudiced results because of flawed assumptions or skewed data used to train the models. Under the executive order, which targets algorithmic discrimination, businesses will be taken to task to explain how their AI-powered processes are geared for fairness across key functions like HR, customer service, and marketing.

Businesses must watch out for racial and gender discrimination in AI-powered processes. AI uses that undermine people’s ability to participate in society, government, and the economy could lead to lawsuits and reputational damage. Additionally, companies will need to show that they have put stringent data-protection measures in place across all operations that involve AI.
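Neither the order nor this article prescribes a particular fairness test, but a common first check in hiring or customer-screening contexts is to compare selection rates across demographic groups. The Python sketch below, with hypothetical function names and fabricated data, computes per-group selection rates and the ratio behind the long-standing “four-fifths” rule of thumb from US employment-selection guidance, under which ratios below 0.8 are typically flagged for closer review.

def selection_rates(outcomes):
    # Per-group selection rate from (group, was_selected) records.
    totals, selected = {}, {}
    for group, was_selected in outcomes:
        totals[group] = totals.get(group, 0) + 1
        selected[group] = selected.get(group, 0) + int(was_selected)
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact_ratios(outcomes, reference_group):
    # Ratio of each group's selection rate to the reference group's rate;
    # values below ~0.8 suggest potential adverse impact worth investigating.
    rates = selection_rates(outcomes)
    ref = rates[reference_group]
    return {g: rate / ref for g, rate in rates.items()}

# Toy usage with fabricated resume-screening results: (group, passed_screen).
records = [("A", True), ("A", True), ("A", False),
           ("B", True), ("B", False), ("B", False)]
print(disparate_impact_ratios(records, reference_group="A"))  # {'A': 1.0, 'B': 0.5}

Selection-rate ratios are only a starting point; a fuller audit would also examine error rates, calibration, and the provenance of the training data.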

Supporting workers 

Let’s face it: AI has caused significant disruption in the labor market, with 45% of developers already feeling that AI threatens their skills. Biden’s AI executive order encourages companies to upskill their employees so they can leverage the full benefits of AI and reduce job losses. Further, the order urges companies to streamline the absorption of skilled immigrant workers in AI, and it lays out a framework for bringing the brightest minds into the country to drive AI innovation.

The Reactions So Far

Major companies have welcomed the E.O. Microsoft has hailed it as a “major step in AI governance.” Google has declared that “it is ready to engage productively with government agencies to realize AI’s potential.” Adobe said that it’s “pleased to see the White House create a framework for regulating AI practices that will accelerate AI’s growth.”

“We'll closely evaluate the specifics of the order and ensure that our AI implementations align with any new regulations or guidelines,” says Luke Lintz, CEO of High Key Enterprises. “As we move into 2024, our priorities for AI implementation will likely focus on enhancing customer experiences, improving automation to streamline processes, and ensuring the responsible and ethical use of AI technology.”

Conclusion

Companies have a significant burden placed on them to prove to customers and governments that they can safely handle AI technology. While this law brings additional compliance and training costs to companies, it also provides opportunities to cultivate customer trust.
