At U.N., Britain to push internet firms to remove extremist content quicker

UNITED NATIONS (Reuters) – The leaders of Britain, France and Italy will push social media companies on Wednesday to remove “terrorist content” from the internet within one to two hours of it appearing because they say that is the period when most material is spread.

British Prime Minister Theresa May, French President Emmanuel Macron and Italian Prime Minister Paolo Gentiloni will raise the issue at an event on the sidelines of the annual gathering of world leaders at the United Nations.

Twitter Inc, Facebook Inc, Microsoft Corp and Alphabet Inc’s Google are among the companies due to attend, the British U.N. mission said. The European Union has threatened legislation if they do not step up efforts to police what is available on the web.

The British U.N. mission said May will welcome progress, but urge companies to go “further and faster” to stop groups like Islamic State spreading material that promotes extremism or shows how to make bombs or attack pedestrians with vehicles.

“Terrorist groups are aware that links to their propaganda are being removed more quickly, and are placing a greater emphasis on disseminating content at speed in order to stay ahead,” May plans to tell the event.

“Industry needs to go further and faster in automating the detection and removal of terrorist content online, and developing technological solutions which prevent it being uploaded in the first place,” she will say.

Responding to pressure from governments in Europe and the United States after a spate of militant attacks, key firms created the Global Internet Forum to Counter Terrorism in June to share technical solutions for removing extremist content and work more with counter-terrorism experts.

Twitter said it had removed 299,649 accounts in the first half of this year for the “promotion of terrorism”, a 20 percent decline from the previous six months, although it gave no reason for the drop. Three-quarters of those accounts were suspended before posting their first tweet.

May said ahead of Wednesday’s event: “We need a fundamental shift in the scale and nature of our response – both from industry and governments – if we are to match the evolving nature of terrorists’ use of the internet.”

Reporting by Michelle Nichols; Editing by Paul Tait

Our Standards: The Thomson Reuters Trust Principles.

Twitter says its controls are weeding out users advocating violence

BRUSSELS (Reuters) – Twitter Inc said that its internal controls were allowing it to weed out accounts being used for the “promotion of terrorism” earlier rather than responding to government requests to close them down.

U.S. and European governments have been pressuring social media companies including Twitter, Facebook Inc and Alphabet Inc’s Google to fight harder against online radicalization, particularly by violent Islamist groups.

Britain’s interior minister, Amber Rudd, used a visit to Silicon Valley last month to ask Facebook, Microsoft, Twitter, and YouTube to step up efforts to remove content that incites militants after four attacks in Britain killed 36 people this year.

“Loser terrorists must be dealt with in a much tougher manner. The internet is their main recruitment tool which we must cut off & use better!” U.S. President Donald Trump tweeted on Friday after a bombing on a London commuter train.

Less than 1 percent of account suspensions were due to government requests, Twitter said, while 95 percent were thanks to the company’s internal efforts to combat radical content with “proprietary tools”, up from 74 percent in its last twice-yearly transparency report.

Twitter defines “promotion of terrorism” as actively inciting or promoting violence “associated with internationally recognized terrorist organizations.”

The vast majority of notices from governments concerned “abusive behavior”, which includes violent threats, harassment, hateful conduct and impersonation.

Twitter said it had removed 935,897 accounts for promotion of terrorism between August 1, 2015 and June 30 this year.

The social media platform said in July it had 328 million average monthly active users in the three months to June 30.

The European Union has threatened legislation to force internet firms to remove illegal content if they do not step up efforts to police what is available on the web.

Twitter said it had received about 3 percent more legal requests and court orders to remove content posted by users in the first half of this year than during the last six months of 2016.

About 90 percent of those removal requests came from Turkey, Russia, France and Germany.

The transparency report showed Turkey was the most active country in seeking the removal of content, accounting for 45 percent of all requests worldwide.

Twitter said it had received eight requests from governments to take down content posted by journalists and news organizations in the first half of 2017 but did not act on any of them “because of their political and journalistic nature.”

Of the eight, five were court orders or other legal demands from Turkey ordering Twitter to take down content from journalists or news outlets.

Turkey detained tens of thousands of people including scores of journalists after a failed coup in July last year. The crackdown by Turkish President Tayyip Erdogan, who has for years tried to stamp out what he sees as illegal online activity, has strained relations with NATO allies and raised alarms among civil liberties advocates.

Twitter said it filed legal objections to court orders involving Turkish journalists and news outlets wherever possible but none of them had prevailed.

Additional reporting by Dustin Volz in Washington; Editing by Keith Weir and Adrian Croft

Is A.I. Just Marketing Hype?

Early today, Slate pointed out that breakthrough technologies always seem to be “five to 10 years away,” citing numerous tech forecasts (energy sources, transportation, medical/body-related technologies, etc.) containing that exact phrase.

It also included some quotes predicting breakthroughs in “Robots/A.I.” in “five to 10 years,” but the earliest was from 2006 and the rest were from the past two years. The lack of older quotes is probably because with A.I., the big breakthrough–the “singularity” that approximates human intelligence–has a fuzzier threshold.

Here are some highlights in the history of A.I. predictions:

  • 1950: Alan Turing predicts a computer will emulate human intelligence (it will be impossible to tell whether you’re texting with a human or a computer) “by the end of the century.”
  • 1970: Life Magazine quotes several distinguished computer scientists saying that “we will have a machine with the general intelligence of a human being” within three to fifteen years.
  • 1983: The huge bestseller The Fifth Generation predicts that Japan will create intelligent machines within ten years.
  • 2002: MIT scientist Rodney Brooks predicts machines will have “emotions, desires, fears, loves, and pride” in 20 years.

Similarly, the futurist Ray Kurzweil has been predicting that the “singularity” will happen in 20 years for at least two decades. His current forecast is that it will happen by 2029. Or maybe 2045. (Apparently he made both predictions at the same conference.)

Meanwhile, we’ve got Elon Musk and Vladimir Putin warning about A.I. Armageddon and invasions of killer robots, and yet… have you noticed that when it comes to actual achievements in A.I., there seems to be far more hype than substance?

Perhaps this is because A.I.–as it exists today–is very old technology. The three techniques used to implement today’s A.I.–rule-based systems, neural networks and pattern recognition–were invented decades ago.

While those techniques have been refined and “big data” added as a way to increase accuracy (as in predicting the next word you’ll type), the results aren’t particularly spectacular, because there have really been no breakthroughs.
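The “predict the next word you’ll type” trick is, at bottom, frequency counting over large amounts of text. A toy sketch of the idea, purely illustrative (real systems use vastly more data plus smoothing and ranking tricks; all names here are made up for the example):

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count, for each word, which words follow it in the training text."""
    words = corpus.lower().split()
    followers = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        followers[prev][nxt] += 1
    return followers

def predict_next(followers, word):
    """Suggest the most frequent follower seen in training, or None."""
    counts = followers.get(word.lower())
    return counts.most_common(1)[0][0] if counts else None

followers = train_bigrams("the cat sat on the mat and the cat slept")
print(predict_next(followers, "the"))  # "cat" — it followed "the" twice, "mat" once
```

Nothing in this sketch “understands” language; more data just makes the counts more representative, which is exactly the refinement-without-breakthrough pattern described above.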

For example, voice recognition is marginally more accurate than 20 years ago in identifying individual spoken words but still lacks any sense of context, which is why, when you’re dictating, inappropriate words always intrude. It’s also why the voice recognition inside voice mail systems is still limited to letters, numbers and a few simple words.

Apple’s Siri is another example. While it’s cleverly programmed to seem to be interacting, it’s easily fooled and often inaccurate, as evidenced by the wealth of Siri “fail” videos on YouTube.

Another area where A.I. is supposed to have made big advances is in strategy games. For years, humans consistently beat computers in the Chinese game of Go. No longer. And computers have long been able to defeat human chess champions.

However, while the ability to play a complex game effectively seems like intelligence, such programs are actually quite stupid. For example, imagine a photo of three chess pieces: on the left a Knight (obviously), in the middle a Queen (again obviously), and on the right a piece called a “Zaraffa,” used in a Turkish variation of chess. If you look at the Zaraffa carefully and you know how to play regular chess, you immediately know its legal moves.

Deep Blue–or any other chess program–could scan that photo for eternity and not “get” it, much less incorporate a “knight plus queen” way of moving into its gameplay. Game-playing programs can’t make mental adjustments that any novice chess player would grasp in a second. They would need to be completely reprogrammed.

Similarly, self-driving cars are also frequently cited as a (potentially job-killing) triumph of A.I. However, the technologies they use–object avoidance, pattern recognition, various forms of radar, etc.–are again decades old.

What’s more, even the most ambitious production implementations of self-driving cars are likely to be limited to freeway driving, the most repetitive and predictable of all driving situations. (While it’s possible self-driving cars may eventually cause fewer accidents than human drivers, that’s because human drivers are so awful.)

The same thing is true of facial recognition. The facial recognition in Apple’s iPhone X is being touted in the press as a huge breakthrough; in fact, the basic technology has been around for decades; what’s new is miniaturizing it so it will fit on a phone.

But what about all those “algorithms” we keep hearing about? Aren’t those A.I.? Well, not really. The dictionary definition of algorithm is “a process or set of rules to be followed in calculations or other problem-solving operations.”

In other words, an algorithm is just a fancy name for the logic inside a computer program. It’s just a reflection of the intent of the programmer. Despite all the Sturm und Drang about computers replacing humans, there’s not the slightest indication that any computer program has created, or ever will create, something original.
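To make that concrete, here is a deliberately trivial, hypothetical example of the kind of hand-written rules that often get marketed as “A.I.” Every word list and threshold below was chosen by the programmer; the program encodes intent, it doesn’t generate insight:

```python
# Hand-picked word lists: the "knowledge" is entirely the programmer's.
NEGATIVE_WORDS = {"terrible", "awful", "broken"}
POSITIVE_WORDS = {"great", "excellent", "love"}

def classify(text):
    """Label text by counting known positive vs. negative words."""
    words = set(text.lower().split())
    score = len(words & POSITIVE_WORDS) - len(words & NEGATIVE_WORDS)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(classify("I love this great phone"))   # positive
print(classify("the battery is terrible"))   # negative
```

Swap in bigger word lists or statistically learned weights and the output improves, but the structure is the same: rules to be followed in a problem-solving operation, exactly as the dictionary definition says.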

IBM’s Watson supercomputer is a case in point. Originally touted as an A.I. implementation that was superior to human doctors in diagnosing cancer and prescribing treatment, it’s since become clear that it does nothing of the kind. As STAT recently pointed out:

“Three years after IBM began selling Watson to recommend the best cancer treatments to doctors around the world, a STAT investigation has found that the supercomputer isn’t living up to the lofty expectations IBM created for it. It is still struggling with the basic step of learning about different forms of cancer.”

What’s more, some of Watson’s capabilities are of the “pay no attention to the man behind the curtain” variety. Again from STAT:

“At its heart, Watson for Oncology uses the cloud-based supercomputer to digest massive amounts of data — from doctor’s notes to medical studies to clinical guidelines. But its treatment recommendations are not based on its own insights from these data. Instead, they are based exclusively on training by human overseers, who laboriously feed Watson information about how patients with specific characteristics should be treated.”

Watson, like everything else under the A.I. rubric, doesn’t live up to the hype. But maybe that’s because the point of A.I. isn’t about breakthroughs. It’s about the hype.

Every ten years or so, pundits dust off the “A.I.” buzzword and try to convince the public that there’s something new and worthy of attention in the current implementation of these well-established technologies.

Marketers start attaching the buzzword to their projects to give them a patina of higher-than-thou tech. Indeed, I did so myself in the mid-1980s by positioning an automated text-processing system I had built as “A.I.” because it used “rule-based programming.” Nobody objected. Quite the contrary; my paper on the subject was published by the Association for Computing Machinery (ACM).

The periodic return of the A.I. buzzword is always accompanied by bold predictions (like Musk’s killer robots and Kurzweil’s singularity) that never quite come to pass. Machines that can think forever remain “20 years in the future.” Meanwhile, all we get is Siri and a fancier version of cruise control. And a boatload of overwrought hand-wringing.

Cisco's Executive Chairman Chambers not to seek re-election

(Reuters) – Cisco Systems Inc (CSCO.O) said on Monday that Executive Chairman John Chambers would not seek re-election after his term expires in December.

Chambers, who led the networking gear maker for two decades as its chief executive, became the executive chairman in July 2015.

Under his leadership, Cisco’s sales surged to about $48 billion from $1.2 billion in 1995.

Cisco’s CEO Chuck Robbins will become the chairman and Chambers, 68, will be given the honorary title of Chairman Emeritus at the company’s annual shareholder meeting in December.

Reporting by Supantha Mukherjee in Bengaluru; Editing by Sriraj Kalluvila
