Saturday, June 17, 2017

An Artificial Intelligence Developed Its Own Non-Human Language - The Atlantic

"A buried line in a new Facebook report about chatbots’ conversations with one another offers a remarkable glimpse at the future of language.

In the report, researchers at the Facebook Artificial Intelligence Research lab describe using machine learning to train their “dialog agents” to negotiate. (And it turns out bots are actually quite good at dealmaking.) At one point, the researchers write, they had to tweak one of their models because otherwise the bot-to-bot conversation “led to divergence from human language as the agents developed their own language for negotiating.” They had to use what’s called a fixed supervised model instead.

In other words, the model that allowed two bots to have a conversation—and use machine learning to constantly iterate strategies for that conversation along the way—led to those bots communicating in their own non-human language. If this doesn’t fill you with a sense of wonder and awe about the future of machines and humanity then, I don’t know, go watch Blade Runner or something."

An Artificial Intelligence Developed Its Own Non-Human Language - The Atlantic

Monday, June 12, 2017

LG Watch Style Review: Not That Stylish

New iPad Pro 10.5 review

2017 MacBook Pro Review

NEW 13" MacBook Pro 2017 Unboxing!

Making Google the Censor - The New York Times

"Prime Minister Theresa May’s political fortunes may be waning in Britain, but her push to make internet companies police their users’ speech is alive and well. In the aftermath of the recent London attacks, Ms. May called platforms like Google and Facebook breeding grounds for terrorism. She has demanded that they build tools to identify and remove extremist content. Leaders of the Group of 7 countries recently suggested the same thing. Germany wants to fine platforms up to 50 million euros if they don’t quickly take down illegal content. And a European Union draft law would make YouTube and other video hosts responsible for ensuring that users never share violent speech.

The fears and frustrations behind these proposals are understandable. But making private companies curtail user expression in important public forums — which is what platforms like Twitter and Facebook have become — is dangerous. The proposed laws would harm free expression and information access for journalists, political dissidents and ordinary users. Policy makers should be candid about these consequences and not pretend that Silicon Valley has silver-bullet technology that can purge the internet of extremist content without taking down important legal speech with it.

Platforms in Europe currently operate notice-and-takedown systems for content that violates the law. Most also prohibit other legal but unwelcome material, like pornography and bullying, under voluntary community guidelines. Sometimes platforms remove too little. More often, research suggests, they remove too much — silencing contested speech rather than risking liability. Accusers exploit this predictable behavior to target expression they don’t like — as the Ecuadorean government has reportedly done with political criticism, the Church of Scientology with religious disputes and disgraced researchers with scholarship debunking their work. Germany’s proposed law increases incentives to err on the side of removal: Any platform that leaves criminal content up for more than 24 hours after being notified about it risks fines as large as 50 million euros.

European politicians tout the proposed laws as curbs on the power of big American internet companies. But the reality is just the opposite. These laws give private companies a role — deciding what information the public can see and share — previously held by national courts and legislators. That is a meaningful loss of national sovereignty and democratic control.

Moving this responsibility from state to private actors also eliminates key legal protections for internet users. Private-platform owners are not constrained by the First Amendment or human rights law the way the police or courts are. Users most likely have no remedy if companies are heavy-handed or sloppy in erasing speech. Governments that outsource speech control to private companies can effectively achieve censorship by proxy.

Proposed laws making platforms go beyond notice and takedown to proactively police users’ speech would be even worse than Germany’s draconian takedown proposals. About 300 hours of video are uploaded to YouTube every minute, so reviewing it is not humanly possible. Courts including the European Union Court of Justice and European Court of Human Rights have recognized that users’ speech and privacy rights will suffer if platforms must vet every word they post. And studies suggest that ordinary internet users self-censor when they think they are being surveilled. Researchers found journalists afraid to write about terrorism, Wikipedia users reluctant to learn about Al Qaeda and Google users avoiding searching for sensitive terms in the wake of the Snowden revelations."

Making Google the Censor - The New York Times:

(Via.)

Amazon Echo Look is fashion-forward thanks to a camera and AI - CNET

Amazon Echo Look is fashion-forward thanks to a camera and AI - CNET

15" Apple MacBook Pro Review (2017, Kaby Lake)