Regulating the internet giants

The world’s most valuable resource is no longer oil, but data.
A new commodity spawns a lucrative, fast-growing industry, prompting antitrust regulators to step in to restrain those who control its flow. A century ago, the resource in question was oil. Now similar concerns are being raised by the giants that deal in data, the oil of the digital era. These titans—Alphabet (Google’s parent company), Amazon, Apple, Facebook and Microsoft—look unstoppable. They are the five most valuable listed firms in the world. Their profits are surging: they collectively racked up over $25bn in net profit in the first quarter of 2017. Amazon captures half of all dollars spent online in America. Google and Facebook accounted for almost all the revenue growth in digital advertising in America last year.

Such dominance has prompted calls for the tech giants to be broken up, as Standard Oil was in the early 20th century. This newspaper has argued against such drastic action in the past. Size alone is not a crime. The giants’ success has benefited consumers. Few want to live without Google’s search engine, Amazon’s one-day delivery or Facebook’s newsfeed. Nor do these firms raise the alarm when standard antitrust tests are applied. Far from gouging consumers, many of their services are free (users pay, in effect, by handing over yet more data). Take account of offline rivals, and their market shares look less worrying. And the emergence of upstarts like Snapchat suggests that new entrants can still make waves.

But there is cause for concern. Internet companies’ control of data gives them enormous power. Old ways of thinking about competition, devised in the era of oil, look outdated in what has come to be called the “data economy”. A new approach is needed.

The Economist → The world’s most valuable resource is no longer oil, but data

Top mistakes data scientists make

The rise of the data scientist continues and social media is filled with success stories – but what about those who fail? There are no cover articles about the many data scientists who don’t live up to the hype and don’t meet the needs of their stakeholders.

The job of the data scientist is solving problems. And some data scientists can’t solve them. They either don’t know how to, or they are so obsessed with the technology part of the craft that they forget what the job is all about. Some get frustrated that “those business people” are asking them to do “simple trivial data tasks” while they’re working on something “really important and complex”. There are many ways a data scientist can fail – here’s a summary of the top three mistakes that lead straight to failure.

Cyborgus → Top mistakes data scientists make

The High-Speed Trading Behind Your Amazon Purchase

Amazon gave people and companies the ability to sell on Amazon.com in 2000, and the marketplace has since grown into a juggernaut, representing 49% of the goods Amazon ships. Amazon doesn't break out numbers for the portion of its business driven by independent sellers, but that share translates to tens of billions of dollars in revenue a year. Out of more than 2 million registered sellers, 100,000 each sold more than $100,000 in goods in the past year, Peter Faricy, Amazon's vice president in charge of the division that includes outside sellers, said at a conference last week.

It's clear, after talking to sellers and the software companies that empower them, that the biggest of these vendors are growing into sophisticated retailers in their own right. The top few hundred use pricing algorithms to battle with one another for the coveted "Buy Box," which designates the default seller of an item. It's the Amazon equivalent of a No. 1 ranking on Google search, and a tremendous driver of sales.
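
To make the mechanics concrete, here is a toy repricing rule in Python: a hypothetical sketch, not any seller's real strategy or an actual Amazon API, that undercuts the lowest rival offer by a cent while respecting a floor price.

```python
# Toy repricer: undercut the lowest rival by one cent, never sell below a floor.
# Illustrative only. Real repricers poll marketplace APIs and also weigh seller
# rating, fulfilment method and shipping cost, not just price.

def reprice(competitor_prices: list[float], floor: float, undercut: float = 0.01) -> float:
    """Return our next price given rivals' current offers."""
    if not competitor_prices:
        return round(floor * 1.5, 2)  # no competition: take a comfortable margin
    target = min(competitor_prices) - undercut
    return round(max(target, floor), 2)  # clamp at the floor

print(reprice([19.99, 21.50, 18.75], floor=15.00))  # -> 18.74
```

Note that two sellers both running a rule like this will chase each other down to their floors, which is why marketplace prices can oscillate like quotes in an order book.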

Dow Jones → The High-Speed Trading Behind Your Amazon Purchase

Netflix’s Grand, Daring, Maybe Crazy Plan to Conquer the World

Netflix is a notoriously data-driven company, and the Daredevil header art test is one of hundreds it will conduct this year. That data trove has also enabled Netflix’s gamble on global expansion, by illuminating one simple fact: People are all different, but not in the ways you’d imagine.

“There’s a mountain of data that we have at our disposal,” says Todd Yellin, Netflix’s VP of product innovation. Netflix has a well-earned reputation for using the information it gleans about its customers to drive everything from the look of the service to the shows in which it invests. “That mountain is composed of two things. Garbage is 99 percent of that mountain. Gold is one percent… Geography, age, and gender? We put that in the garbage heap. Where you live is not that important.”

Wired → Netflix’s Grand, Daring, Maybe Crazy Plan to Conquer the World

The Big Data Heist

Every day, people give away their data to just a few shareholders, using corporate giants as wealth managers. These data funds, such as Google or Facebook, then spin off AI-powered applications that in turn become privately owned assets. As productive resources, data, AI and their byproducts are about to replace most jobs of the working class, even those of relatively senior executives. Many predict that new AI-related jobs will offset only a small fraction of those redundancies. This issue of “jobless growth” is a core characteristic of our transition into Industry 4.0, and has pushed even Bill Gates to suggest that a tax on AI should be an option.

PersonalData.IO → The Big Data Heist

The Arrival of Artificial Intelligence

What is kind of amusing — and telling — is that as John McCarthy, who invented the name “Artificial Intelligence”, noted, the definition of specialized AI is changing all of the time. Specifically, once a task formerly thought to characterize artificial intelligence becomes routine — like the aforementioned chess-playing, or Go, or a myriad of other taken-for-granted computer abilities — we no longer call it artificial intelligence.

Stratechery → The Arrival of Artificial Intelligence

A.I. Versus M.D.

In January, 2015, the computer scientist Sebastian Thrun became fascinated by a conundrum in medical diagnostics. Thrun, who grew up in Germany, is lean, with a shaved head and an air of comic exuberance; he looks like some fantastical fusion of Michel Foucault and Mr. Bean. Formerly a professor at Stanford, where he directed the Artificial Intelligence Lab, Thrun had gone off to start Google X, directing work on self-learning robots and driverless cars. But he found himself drawn to learning devices in medicine. His mother had died of breast cancer when she was forty-nine years old—Thrun’s age now. “Most patients with cancer have no symptoms at first,” Thrun told me. “My mother didn’t. By the time she went to her doctor, her cancer had already metastasized. I became obsessed with the idea of detecting cancer in its earliest stage—at a time when you could still cut it out with a knife. And I kept thinking, Could a machine-learning algorithm help?”

The New Yorker → A.I. Versus M.D.

The death of interactive infographics?

Barely anyone interacts with the New York Times’ graphics. The New York Times makes arguably some of the best interactives in the field, which made Gregor’s talk even more depressing. His finding that only 10–15% of readers click on buttons, even essential ones, suggests that interactives are a waste of time and money.

Startup Grind → The death of interactive infographics?

Machine Learning for Product Managers

I therefore [...] ran a session at Skyscanner that aimed to cover machine learning from a non-technical, product-centric perspective. We first covered definitions, and then moved on to a number of key issues that are important to keep in mind to create successful products that go beyond ‘just’ the ML. This post is a summary of that session.

Hacker Noon → Machine Learning for Product Managers

If Data Visualization is So Hot, Why Are People Leaving?

There are prominent theorists and practitioners in data visualization who simply do not believe there is such a thing as a dedicated data visualization role in industry. For those critics there is no profession, only a skill used near the end of a long process performed by scientists, analysts and engineers. In contrast, there’s a celebratory data visualization community that gathers for the Information is Beautiful Awards and looks to people like David McCandless as a thought leader. The more serious are in or allied with journalism, the more exotic might call themselves artists, and the freelancers and consulting firms that dominate this area might see themselves as a bit of both. In their case, catching an audience in an attention economy is a prominent requirement of their data visualization work. Somewhere in between these two sides is a growing professional space sometimes referred to as “data visualization product”. It’s occupied by people with different titles (I, for instance, am a Senior Data Visualization Engineer) who create custom data visualization applications that are more than just the product of industry tools but not quite as hand-crafted as data visualization in journalism or in public communication pieces.

Elijah Meeks → If Data Visualization is So Hot, Why Are People Leaving?

Most Scenic City Routes Mapped Using Photo Data

Tapping into geo-tagging data and the collective wisdom of photographers, you can use this interactive tool to follow in the footsteps of those who have mapped out the most beautiful routes through cities. [...]
Eric Fischer of Mapbox has spent years compiling data from Flickr users, turning their sequential geo-located uploads into paths through urban environments including San Francisco, Beijing, Istanbul and Tokyo.

Urbanist → Geotagger World Atlas: Most Scenic City Routes Mapped Using Photo Data

An AI attempt at reducing energy usage

Alphabet's London-based AI outfit DeepMind and the National Grid are in early-stage talks to reduce the UK's power usage purely through neural networks and machine learning—no new infrastructure required.

Demis Hassabis, co-founder and CEO of DeepMind (and lead programmer on Peter Molyneux's Theme Park), hopes that the UK's energy usage could be reduced by as much as 10 percent, just through AI-led optimisation. The UK generated around 330 terawatt-hours (TWh) of energy in 2014, at a cost of tens of billions of pounds—so a 10 percent reduction could be pretty significant, both in terms of money spent and carbon dioxide produced.

The National Grid, owned by a publicly traded company of the same name, owns and operates the UK's power transmission network—that is, the country's power lines and major substations. The sources of energy—power stations, hydro plants, wind turbines, and a smattering of solar panels—are owned by other big companies (primarily EDF and E.On).

Importantly, though, it is the National Grid's job to balance supply and demand across the network, so that the AC frequency that arrives at your house is always within ±1% of 50 Hz. Energy demands are usually quite predictable, in that they closely align with standard human behaviour (waking and sleeping hours) and the weather. Energy supply, however, is much less reliable, especially as the UK adds more wind and solar power to the mix.
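
The article's figures are easy to sanity-check. A back-of-the-envelope sketch in Python, using only the numbers quoted above:

```python
# Sanity-checking the article's numbers (illustrative, not a model of the grid).
NOMINAL_HZ = 50.0
TOLERANCE = 0.01  # ±1%
low, high = NOMINAL_HZ * (1 - TOLERANCE), NOMINAL_HZ * (1 + TOLERANCE)
print(f"Acceptable grid frequency: {low:.1f}-{high:.1f} Hz")  # 49.5-50.5 Hz

generated_twh = 330               # UK generation in 2014, per the article
saved_twh = 0.10 * generated_twh  # the 10% DeepMind hopes to shave off
print(f"Potential saving: ~{saved_twh:.0f} TWh per year")     # ~33 TWh
```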

Ars Technica UK → DeepMind in talks with National Grid to reduce UK energy use by 10%

Training machines to know when they are wrong

The questions I’m asking here are aimed at how we might arm people with stronger signals about how much to trust an answer machine’s response.

This approach suggests a certain faith in human beings’ capacity to kick in with critical thinking in the face of ambiguous information. I’d like to be optimistic about this, to believe that we can get people thinking about the facts they receive if we give them the proper prompts.

We’re not in a good place here, though. One study found that only 19% of college faculty can even give a clear explanation of what critical thinking is—let alone teach it. We lean hard on the answer machines and the news-entertainment industrial complex to get the facts that guide our personal and civic decisions, but too many of us are poorly equipped with skills to evaluate those facts.

So the more Google and other answer machines become the authorities of record, the more their imperfect understanding of the world becomes accepted as fact. Designers of all data-driven systems have a responsibility to ask hard questions about proper thresholds of data confidence—and how to communicate ambiguous or tainted information.

How can we make systems that are not only smart enough to know when they’re not smart enough… but smart enough to say so and signal that human judgment has to come into play?
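
One concrete pattern behind that question is selective prediction: wrap the model so that it abstains and defers to a human whenever its confidence falls below a threshold. A minimal sketch, assuming a model that returns an answer and a probability; the 0.9 default is an arbitrary placeholder, not a recommendation:

```python
# Minimal "knows when it doesn't know" wrapper: answer only above a confidence
# threshold, otherwise abstain and flag the question for human judgment.
# The 0.9 default is an arbitrary placeholder; a real system would calibrate
# it against the cost of delivering a wrong answer.

from typing import Callable, Optional, Tuple

def answer_or_abstain(
    predict: Callable[[str], Tuple[str, float]],  # any model: question -> (answer, confidence)
    question: str,
    threshold: float = 0.9,
) -> Tuple[Optional[str], float, bool]:
    """Return (answer, confidence, deferred); answer is None when we abstain."""
    answer, confidence = predict(question)
    if confidence >= threshold:
        return answer, confidence, False
    return None, confidence, True  # signal: human judgment has to come into play

# Stand-in model for demonstration:
fake_model = lambda q: ("42", 0.62)
print(answer_or_abstain(fake_model, "What is the answer?"))  # (None, 0.62, True)
```

The hard part is not the wrapper but choosing and communicating the threshold, which is exactly the design responsibility the piece argues for.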

Big Medium → Systems Smart Enough To Know When They're Not Smart Enough

Top 10 Hot Artificial Intelligence Technologies

The market for artificial intelligence (AI) technologies is flourishing. Beyond the hype and the heightened media attention, the numerous startups and the internet giants racing to acquire them, there is a significant increase in investment and adoption by enterprises. A Narrative Science survey found last year that 38% of enterprises are already using AI, growing to 62% by 2018. Forrester Research predicted a greater than 300% increase in investment in artificial intelligence in 2017 compared with 2016. IDC estimated that the AI market will grow from $8 billion in 2016 to more than $47 billion in 2020.
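
Taken at face value, the IDC forecast implies a compound annual growth rate of roughly 56%; the arithmetic, for the curious:

```python
# Implied compound annual growth rate of the IDC forecast cited above.
start, end, years = 8e9, 47e9, 4   # $8bn in 2016 -> $47bn in 2020
cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # ~55.7%
```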

Coined in 1955 to describe a new computer science sub-discipline, “Artificial Intelligence” today includes a variety of technologies and tools, some time-tested, others relatively new. To help make sense of what’s hot and what’s not, Forrester just published a TechRadar report on Artificial Intelligence (for application development professionals), a detailed analysis of 13 technologies enterprises should consider adopting to support human decision-making.

Forbes → Top 10 Hot Artificial Intelligence (AI) Technologies

Inside Facebook’s AI Machine

“I think that we’ve made the world a much better place,” says Joaquin Candela, who directs Facebook’s Applied Machine Learning group, and he offers to tell a story. The day before our interview, Candela made a call to a Facebook connection he had met only once—a father of one of his friends. He had seen that person posting pro-Trump stories, and was perplexed by their thinking. Then Candela realized that his job is to make decisions based on data, and he was missing important information. So he messaged the person and asked for a conversation. The contact agreed, and they spoke by phone. “It didn’t change reality for me, but made me look at things in a very, very different way,” says Candela. “In a non-Facebook world I never would have had that connection.” In other words, though AI is essential — even existential — for Facebook, it’s not the only answer. “The challenge is that AI is really in its infancy still,” says Candela. “We’re only getting started.”

backchannel → Inside Facebook’s AI Machine

Should economists be more concerned about Artificial Intelligence?

This post highlights some of the possible economic implications of the so-called “Fourth Industrial Revolution” — whereby the use of new technologies and artificial intelligence (AI) threatens to transform entire industries and sectors. Some economists have argued that, like past technical change, this will not create large-scale unemployment, as labour gets reallocated. However, many technologists are less optimistic about the employment implications of AI.  In this blog post we argue that the potential for simultaneous and rapid disruption, coupled with the breadth of human functions that AI might replicate, may have profound implications for labour markets.  We conclude that economists should seriously consider the possibility that millions of people may be at risk of unemployment, should these technologies be widely adopted.

Bank Underground → Should economists be more concerned about Artificial Intelligence?

Elon Musk’s Billion-Dollar Crusade to Stop the A.I. Apocalypse

Elon Musk is famous for his futuristic gambles, but Silicon Valley’s latest rush to embrace artificial intelligence scares him. And he thinks you should be frightened too. Inside his efforts to influence the rapidly advancing field and its proponents, and to save humanity from machine-learning overlords.

Vanity Fair → Elon Musk's Billion-Dollar Crusade to Stop the A.I. Apocalypse