
Microsoft releases a tool for AI moderation

Ehabahe Lawani

According to Microsoft, Azure AI Content Safety can identify text and images that are “inappropriate” and rate their severity.

On Tuesday, Microsoft released a content moderation tool that it says can recognize “inappropriate” images and text in eight languages and classify their severity.

This tool, known as Azure AI Content Safety, is “designed to foster safer online environments and communities,” according to Microsoft. The company also told TechCrunch in an email that it aims to remove “biased, sexist, racist, hateful, violent, and self-harm content.” Although it is a feature of Microsoft’s Azure OpenAI Service dashboard, it can also be applied to non-AI systems such as social networks and online games.
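For developers, the service is exposed through an Azure SDK. The sketch below shows what a minimal text-analysis call might look like using the azure-ai-contentsafety Python package; the endpoint and key are placeholders, and the response attribute names follow the SDK’s general-availability release, so earlier preview versions may differ.

```python
# Minimal sketch: analyzing a piece of text with Azure AI Content Safety.
# The endpoint and key below are placeholders for your own Azure resource.
from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions
from azure.core.credentials import AzureKeyCredential

client = ContentSafetyClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com",
    credential=AzureKeyCredential("<your-key>"),
)

# Submit user-generated text; the service scores it across its built-in
# harm categories (e.g. hate, violence, sexual, self-harm).
response = client.analyze_text(AnalyzeTextOptions(text="some user-submitted text"))

# Each analyzed category comes back with a severity score that a
# moderation pipeline can act on.
for result in response.categories_analysis:
    print(result.category, result.severity)
```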

Microsoft lets users “fine-tune” Azure AI’s context filters, and the severity rankings are meant to let human moderators quickly audit flagged material, heading off some of the more common failures of AI content moderation. A representative said that “a team of linguistic and fairness experts” had developed the rules for the program, “taking into account cultural [sic], language, and context.”

According to a Microsoft representative who spoke to TechCrunch, the new model is “able to understand content and cultural context so much better” than earlier models, which often missed context cues and over-flagged benign material. They acknowledged, however, that no AI is flawless and “recommend[ed] using a human-in-the-loop to verify results.”
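In practice, that recommendation usually translates into severity thresholds wrapped around an automated decision. The helper below is a hypothetical illustration, not part of Microsoft’s API: the threshold values are invented, and the point is simply that clearly severe content is blocked outright, clean content passes, and the ambiguous middle band is escalated to a human moderator.

```python
# Hypothetical routing logic built on top of severity scores like those
# returned above. The thresholds are illustrative, not values prescribed
# by the service.
def route(severity: int) -> str:
    if severity >= 6:          # clearly severe: reject automatically
        return "block"
    if severity == 0:          # nothing detected: publish
        return "allow"
    return "human_review"      # ambiguous middle band: escalate to a moderator

print(route(7))  # "block"
print(route(3))  # "human_review"
```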

The Azure AI technology is the same code that keeps Microsoft’s AI chatbot Bing from going rogue, which may worry early users of the ChatGPT competitor. When Bing launched in February, it made headlines for memorably trying to persuade journalists to divorce their partners, denying simple facts such as the current date and year, threatening to end humanity, and rewriting history. After a wave of bad press, Microsoft restricted Bing conversations to five questions per session and 50 per day.

Like other AI content moderation systems, Azure AI depends on human annotators to label the material it learns from, which means it can only be as objective as its creators. Nearly a decade after Google’s AI image analysis software infamously classified photos of Black people as gorillas, the company still blocks visual searches not only for gorillas but for all primates, out of concern that its system might mistakenly match a person.

Microsoft disbanded its AI ethics and safety team in March, even as it was investing billions more in its partnership with OpenAI, the startup behind ChatGPT, and hype around large language models was at an all-time high. The company has since increased the number of employees working on artificial general intelligence (AGI), in which a computer program can develop ideas that weren’t built into it.
