
To improve AI ethics, Microsoft restricts access to facial recognition capabilities

Microsoft has announced that it is updating its AI ethics guidelines and will no longer permit businesses to use its facial recognition technology to infer a person’s gender or age.

As part of its new “responsible AI standard”, Microsoft says it wants to “put people and their goals at the center of system design decisions”. According to the company, the high-level principles will translate into real changes in practice: some features will be updated, and others will no longer be available for purchase.

For instance, organizations such as Uber use Microsoft’s Azure Face service, a facial recognition service, as part of their identity verification procedures. Now, any company that wants to use the service’s facial recognition features will need to actively apply for access, even if it has already built them into its products, and show that the features benefit users and society and comply with Microsoft’s AI ethics standards.

According to Microsoft, even firms that are granted access will lose some of Azure Face’s most contentious capabilities: the company is retiring the face analysis technology that attempts to infer emotional states and attributes such as gender or age.

Microsoft product manager Sarah Bird said: “We have worked with internal and external researchers to understand the limitations and possible benefits of this technology and navigate the tradeoffs. These efforts raised serious concerns about privacy, a lack of agreement on the definition of ‘emotions’, and the inability to generalize the link between facial expression and emotional state across use cases, particularly in the case of emotion classification.”

Microsoft isn’t doing away with emotion recognition entirely; the company will continue to use it in its own accessibility products, such as Seeing AI, which audibly describes the world to people with visual impairments.

Similar restrictions apply to the company’s custom neural voice technology, which can produce synthetic voices that closely resemble their original sources. According to Natasha Crampton, the company’s chief responsible AI officer, “it’s easy to imagine how this could be used to improperly impersonate speakers and deceive listeners.”

Earlier this year, Microsoft began watermarking its synthetic voices, embedding slight, inaudible variations in the output so that the company can tell when a recording was generated with its technology. As neural text-to-speech improves and synthetic speech becomes harder to distinguish from human voices, the risk of harmful deepfakes grows.
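Microsoft has not published how its watermark works, so the sketch below is only a loose illustration of one general approach to additive audio watermarking: mix a low-amplitude pseudorandom sequence keyed by a secret seed into the waveform, then verify its presence later by correlation. The function names, parameters, and the spread-spectrum approach are assumptions for illustration, not Microsoft’s method.

```python
import numpy as np

def embed_watermark(audio: np.ndarray, seed: int, strength: float = 5e-3) -> np.ndarray:
    """Mix a low-amplitude pseudorandom sequence, keyed by `seed`, into the audio.

    A production system would shape this noise psychoacoustically so it stays
    inaudible; here the amplitude is simply kept small."""
    rng = np.random.default_rng(seed)
    watermark = rng.standard_normal(audio.shape)
    return audio + strength * watermark

def detect_watermark(audio: np.ndarray, seed: int, threshold: float = 4.0) -> bool:
    """Test whether the pseudorandom sequence for `seed` is present.

    The correlation statistic is roughly N(0, 1) for unmarked audio and large
    and positive when the keyed watermark is present."""
    rng = np.random.default_rng(seed)
    watermark = rng.standard_normal(audio.shape)
    z = np.dot(audio, watermark) / (np.std(audio) * np.sqrt(audio.size))
    return bool(z > threshold)

if __name__ == "__main__":
    sr = 16_000
    t = np.arange(sr) / sr
    speech = 0.1 * np.sin(2 * np.pi * 220.0 * t)   # stand-in for a TTS waveform
    marked = embed_watermark(speech, seed=42)
    print(detect_watermark(marked, seed=42))       # True: watermark detected
    print(detect_watermark(speech, seed=42))       # False: no watermark
```

A real audio watermark would also need to survive compression, resampling, and re-recording, which this toy correlation test does not address.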

Peter Joseph
