
An Artificial Intelligence Developed Its Own Non-Human Language

A buried line in a new Facebook report about chatbots’ conversations with one another offers a remarkable glimpse at the future of language.

In the report, researchers at the Facebook Artificial Intelligence Research lab describe using machine learning to train their “dialog agents” to negotiate. (And it turns out bots are actually quite good at dealmaking.) At one point, the researchers write, they had to tweak one of their models because otherwise the bot-to-bot conversation “led to divergence from human language as the agents developed their own language for negotiating.” They had to use what’s called a fixed supervised model instead.

In other words, the model that allowed two bots to have a conversation—and use machine learning to constantly iterate strategies for that conversation along the way—led to those bots communicating in their own non-human language. If this doesn’t fill you with a sense of wonder and awe about the future of machines and humanity, then, I don’t know, go watch Blade Runner or something.

The larger point of the report is that bots can be pretty decent negotiators—they even use strategies like feigning interest in something valueless so that they can later appear to “compromise” by conceding it. But the detail about language is, as one tech entrepreneur put it, a mind-boggling “sign of what’s to come.”

To be clear, Facebook’s chatty bots aren’t evidence of the singularity’s arrival. Not even close. But they do demonstrate how machines are redefining people’s understanding of so many realms once believed to be exclusively human—like language.

Already, there’s a good deal of guesswork involved in machine learning research, which often involves feeding a neural net a huge pile of data and then examining the output to try to understand how the machine thinks. But the fact that machines will make up their own non-human ways of conversing is an astonishing reminder of just how little we know, even when people are the ones designing these systems.

“There remains much potential for future work,” Facebook’s researchers wrote in their paper, “particularly in exploring other reasoning strategies, and in improving the diversity of utterances without diverging from human language.”

Facebook Inc on Thursday offered new insight into its efforts to remove terrorism content, a response to political pressure in Europe over militant groups using the social network for propaganda and recruiting.

Facebook has ramped up its use of artificial intelligence, such as image matching and language understanding, to identify and remove such content quickly, Monika Bickert, Facebook’s director of global policy management, and Brian Fishman, counterterrorism policy manager, explained in a blog post.

Facebook’s image matching checks whether a photo or video being uploaded matches a known photo or video from groups it has defined as terrorist, such as Islamic State, Al Qaeda and their affiliates, the company said in the blog post.

YouTube, Facebook, Twitter and Microsoft last year created a common database of digital fingerprints automatically assigned to videos or photos of militant content to help each other identify the same content on their platforms.
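To make the shared-fingerprint idea concrete, here is a minimal sketch of hash-based image matching using the open-source Pillow and imagehash Python packages. Facebook has not published its implementation, so the perceptual hash, the sample fingerprint, and the distance threshold below are illustrative assumptions, not the company’s actual system.

```python
# Minimal sketch of fingerprint-based image matching (illustrative only).
# Requires: pip install pillow imagehash
from PIL import Image
import imagehash

# Hypothetical fingerprints of known militant images, standing in for
# the shared industry database described above.
KNOWN_FINGERPRINTS = [
    imagehash.hex_to_hash("d1c1b0a0e0f0c0d0"),  # placeholder value
]

# Hamming-distance threshold: 0 means an exact hash match; a small
# positive value tolerates re-encoding and resizing. Assumed, not real.
MATCH_THRESHOLD = 5

def matches_known_content(path: str) -> bool:
    """Return True if the uploaded image matches a known fingerprint."""
    upload_hash = imagehash.phash(Image.open(path))
    return any(upload_hash - known <= MATCH_THRESHOLD
               for known in KNOWN_FINGERPRINTS)

if __name__ == "__main__":
    print(matches_known_content("upload.jpg"))
```

A real deployment would compare uploads against millions of fingerprints with an indexed nearest-neighbor lookup rather than a linear scan; the loop here just keeps the idea visible.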

Similarly, Facebook now analyses text that has already been removed for praising or supporting militant organizations to develop text-based signals for such propaganda.
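As an illustration of what “text-based signals” can mean in practice, here is a minimal sketch of a classifier trained on previously removed text, using scikit-learn. Facebook’s actual models are not public; the toy training set, TF-IDF features, and logistic regression below are assumptions chosen for brevity.

```python
# Minimal sketch of learning text-based signals from removed posts
# (illustrative only; not Facebook's pipeline).
# Requires: pip install scikit-learn
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: text already removed for praising or
# supporting militant organizations (label 1) versus benign text (label 0).
texts = [
    "join the fight and pledge allegiance to the caliphate",
    "family photos from our weekend picnic",
]
labels = [1, 0]

# TF-IDF over unigrams and bigrams feeding a linear classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

# Score a new post; high scores would be queued for review rather than
# removed automatically.
score = model.predict_proba(["pledge allegiance to the fighters"])[0][1]
print(f"propaganda likelihood: {score:.2f}")
```

With two training examples this only shows the shape of the approach; the value of the real system comes from the volume of already-removed content it can learn from.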

“More than half the accounts we remove for terrorism are accounts we find ourselves, that is something that we want to let our community know so they understand we are really committed to making Facebook a hostile environment for terrorists,” Bickert said in a telephone interview.

Germany, France and Britain, countries where civilians have been killed and wounded in bombings and shootings by Islamist militants in recent years, have pressed Facebook and other internet companies such as Google and Twitter to do more to remove militant content and hate speech.

Government officials have threatened to fine the company and strip it of the broad legal protections it enjoys against liability for content posted by its users.

Asked why Facebook was opening up now about policies that it had long declined to discuss, Bickert said recent attacks were naturally starting conversations among people about what they could do to stand up to militancy.

In addition, she said, “we’re talking about this because we are seeing this technology really start to become an important part of how we try to find this content.”

(Reporting by Julia Fioretti; Editing by Jonathan Weber and Grant McCool)

