The implication is clear: home cooks are being radicalized by the site’s recommendation algorithm to abandon their corned beef in favor of shrapnel-packed homemade bombs. And more ominously, enough people must be buying these bomb parts on Amazon for the algorithm to have noticed the correlations and begun making its dark suggestions.
But as a few more minutes of clicking would have shown, the only thing Channel 4 has discovered is a hobbyist community of people who mill their own black powder at home, safely and legally, for use in fireworks, model rockets, antique firearms, or to blow up the occasional stump.
If you’re a computer science aficionado, you may be familiar with the travelling salesman problem: given a list of cities and the distances between them, how can a salesman visit each city exactly once using the shortest possible route and then end up back where he started? This is possibly the most popular example of a complex optimization problem there is. As you add more locations, the problem gets harder and harder, so that for a large number of cities it is impossible to find the single optimal solution in a reasonable amount of time.
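To make that combinatorial explosion concrete, here is a minimal brute-force sketch in Python (the city coordinates are made up purely for illustration): checking every tour is fine for four cities, but the number of distinct tours grows factorially, which is exactly why large instances become intractable.

```python
import itertools
import math

# Hypothetical city coordinates, purely for illustration.
cities = {"A": (0, 0), "B": (1, 5), "C": (4, 3), "D": (6, 1)}

def tour_length(order):
    """Total distance of a round trip visiting cities in the given order."""
    total = 0.0
    for here, there in zip(order, order[1:] + order[:1]):
        (x1, y1), (x2, y2) = cities[here], cities[there]
        total += math.hypot(x2 - x1, y2 - y1)
    return total

# Brute force: try every ordering of the cities -- feasible for 4 cities,
# utterly infeasible for, say, 30, where the tour count is astronomical.
best = min(itertools.permutations(cities), key=tour_length)
print(best, round(tour_length(best), 2))
```

Real solvers sidestep the brute-force search with heuristics (nearest neighbor, 2-opt, simulated annealing) that find good, though not provably optimal, tours.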
I’m skeptical that AI will lead to mass unemployment in the next 50 years (which is about as long a time frame as I’m comfortable making any kind of prediction about), and perhaps this is a good moment to lay out why.
Why do so many companies make bad decisions, even with access to unprecedented amounts of data? With stories from Nokia to Netflix to the oracles of ancient Greece, Tricia Wang demystifies big data and identifies its pitfalls, suggesting that we focus instead on "thick data" -- precious, unquantifiable insights from actual people -- to make the right business decisions and thrive in the unknown.
AI is silently reshaping our entire society: our day-to-day work, the products we purchase, the news we read, how we vote, and how governments govern, for example. But as anyone who’s searched endlessly through Netflix without finding anything to watch can attest, AI isn’t perfect. And while it’s easy to pause a movie when Netflix’s algorithm misjudges your tastes, the stakes are much higher when it comes to the algorithms that are used to decide more serious issues, like prison sentences, credit scores, or housing.
These algorithms are often proprietary: We don’t know exactly how they work or how they’re designed. This makes it virtually impossible to audit them, which is why research that digs into how AI is programmed is so crucial. In short, AI’s biases are civil liberty problems, so the partnership between AI Now and the ACLU is a natural one. Together, they hope to become a formidable force in achieving bias-free AI.
In 1977, the great computer scientist Donald Knuth published a paper called “The Complexity of Songs,” which is basically one long joke about the repetitive lyrics of newfangled music (example quote: "the advent of modern drugs has led to demands for still less memory, and the ultimate improvement of Theorem 1 has consequently just been announced").
I'm going to try to test this hypothesis with data. I'll be analyzing the repetitiveness of a dataset of 15,000 songs that charted on the Billboard Hot 100 between 1958 and 2017.
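Compression offers one simple, quantifiable proxy for repetitiveness (a minimal sketch using Python’s `zlib`, not necessarily the exact metric the analysis uses): the more a lyric repeats itself, the more a Lempel-Ziv-style compressor can shrink it.

```python
import zlib

def repetitiveness(lyrics: str) -> float:
    """Share of the text that compression 'explains away':
    near 0 = incompressible, near 1 = highly repetitive."""
    raw = lyrics.encode("utf-8")
    return 1 - len(zlib.compress(raw, 9)) / len(raw)

chorus = "around the world " * 32          # a famously repetitive lyric
prose = "the quick brown fox jumps over the lazy dog once"
print(round(repetitiveness(chorus), 2), round(repetitiveness(prose), 2))
```

Scoring each of the 15,000 songs this way would let repetitiveness be compared across years on a single scale.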
It’s sometimes argued that the long-term benefits of self-driving cars, such as safer roads, won’t be felt until robotic vehicles account for the majority of traffic on the road. Until that happens, those unpredictable lumps of meat we call humans will continue to exert their own effects on traffic—continuing to cause accidents, for instance. But a new study out of the University of Illinois at Urbana-Champaign suggests that the addition of just a small number of autonomous cars can ease the congestion on our roads.
You’ve likely seen the demonstration of phantom traffic jams where cars drive around in a circle to simulate the impact of a single slowing car on a road full of traffic. One car pumps its brakes for no particular reason, and the slowdown ripples through the traffic. Now, the University of Illinois research, led by Daniel Work, shows that placing even just a single autonomous car into one of those circular traffic simulations can dampen the effects of the phantom traffic jam.
I recently came across a great natural language dataset from Mark Riedl: 112,000 plots of stories downloaded from English-language Wikipedia. This includes books, movies, TV episodes, video games: anything that has a Plot section on a Wikipedia page.
This offers a great opportunity to analyze story structure quantitatively. In this post I’ll do a simple analysis, examining what words tend to occur at particular points within a story, including words that characterize the beginning, middle, or end.
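The core of that positional analysis can be sketched in a few lines (with a two-plot toy corpus standing in for the real dataset): record each word’s relative position within its story, then average, so low scores mark "beginning" words and high scores mark "ending" words.

```python
from collections import defaultdict

# Toy stand-in for the real corpus: each "plot" is just a list of words.
plots = [
    "young hero leaves home fights villain victory wedding happily".split(),
    "young detective finds body chases suspect confession trial happily".split(),
]

positions = defaultdict(list)
for words in plots:
    last = len(words) - 1
    for i, word in enumerate(words):
        # Relative position: 0.0 = first word of the plot, 1.0 = last word.
        positions[word].append(i / last)

# Average relative position per word: low = beginnings, high = endings.
avg_pos = {w: sum(p) / len(p) for w, p in positions.items()}
print(sorted(avg_pos, key=avg_pos.get)[:3])   # words that skew early
```

On the real corpus one would also filter to words that appear in many plots, so a single story can’t dominate a word’s average.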
At Instacart, we deliver a lot of groceries. By the end of next year, 80% of American households will be able to use Instacart. Our challenge: complete every delivery on time and with the right groceries, as fast as possible.
Over the course of a week, we traverse cities all over the United States many times over while delivering groceries:
How do we bring order to the chaos?
[...] we’ll first introduce the logistics problem Instacart is solving, outline the architecture of our systems and describe the GPS data we collect. Then we will conclude by touring a series of datashader visualizations:
Visualizations like these help us to build intuition about our system, generate hypotheses for improvements, sanity check our changes, identify best practices and improve our operations.
Millions of Italians can now say they own a one-of-a-kind Nutella jar. In February, 7 million jars appeared on shelves in Italy, all of them boasting a unique label design. And here's a weird twist: Every single one of those millions of labels was designed by...an algorithm?
[...] this algorithm's output was millions upon millions of labels for real-life Nutella jars.
Data is essential to us at Airbnb. We characterize data as the voice of our users at scale. Thus, data science plays the role of an interpreter: we use data and statistics to understand our users and translate that understanding into a voice that people or machines can understand. We pair these quantitative insights with qualitative insights (e.g. in-person user research) to make the best possible decisions for both the business and our community of hosts and guests.
Today the technology that ran that arcade game permeates every aspect of our lives. We’re here at an emerging technology conference to celebrate it, and find out what exciting things will come next. But like the tail follows the dog, ethical concerns about how technology affects who we are as human beings, and how we live together in society, follow us into this golden future. No matter how fast we run, we can’t shake them.
This year especially there’s an uncomfortable feeling in the tech industry that we did something wrong, that in following our credo of “move fast and break things”, some of what we knocked down were the load-bearing walls of our democracy.
I've heard that in the future computerized AIs will become so much smarter than us that they will take all our jobs and resources, and humans will go extinct. Is this true?
That’s the most common question I get whenever I give a talk about AI. The questioners are earnest; their worry stems in part from some experts who are asking themselves the same thing. These folks are some of the smartest people alive today, such as Stephen Hawking, Elon Musk, Max Tegmark, Sam Harris, and Bill Gates, and they believe this scenario very likely could be true. Recently at a conference convened to discuss these AI issues, a panel of nine of the most informed gurus on AI all agreed this superhuman intelligence was inevitable and not far away.
Artificial Intelligence and the fourth industrial revolution have made considerable progress over the last couple of years. Most of the usable progress so far has been developed for industry and business purposes, as you’ll see in coming posts. Research institutes and dedicated, specialised companies are working toward the ultimate goal of AI (cracking artificial general intelligence), developing open platforms and looking into the ethics that follow. There is also a good handful of companies working on AI products for consumers.
Used by people in over 200 countries and territories around the world, HDX has become the platform the UN, NGOs, governments and humanitarian actors can depend on when coordinating data-driven relief efforts. In fact, the United Nations (along with the New York Times and The Economist) relied on HDX as the common platform for data during the Ebola Crisis.
If you can relate to the stress of rushing to the airport, long security lines leading to crowded terminals, boarding passes and IDs, checking and stowing bags, and cramped compartments filled with travel-weary passengers, you probably consider yourself an experienced airline passenger. However, knowing the ins and outs of flying as a passenger doesn’t give the average person insight into the complicated operations side of airlines today. The intense competition within the airline industry drives innovation as companies seek to save money, make money, and increase efficiency, with a recent focus on the advantages big data provides.
Examples of airlines creatively using big data to improve performance abound. United Airlines shifted focus in 2014, adopting the mantra “collect, detect and analyze,” and saw a 15 percent year-over-year revenue increase in online sales after offering customers a tailored, big-data-driven shopping experience. Delta invested in baggage-tracking data and then created a baggage-tracking app for customers that has been downloaded over 11 million times. Southwest started using a big data platform to track their Boeing planes’ fuel-usage trends, which is saving the airline millions of dollars annually. Japan Airlines recently launched a data collection system with IBM Japan that measures the temperature of airplane components. The idea is to collect enough data to predict technical problems and prevent costly flight cancellations.
We thought knowledge was about finding the order hidden in the chaos. We thought it was about simplifying the world. It looks like we were wrong. Knowing the world may require giving up on understanding it.
In his “2017 Design in Tech Report,” John Maeda writes that “code is not the only unicorn skill.” According to Maeda, who is the head of computational design and inclusion at Automattic and former VP of design at VC firm Kleiner Perkins, words can be just as powerful as the graphics in which designers normally traffic. “A lot of times designers don’t know that words are important,” he said while presenting the report at SXSW this weekend. “I know a few designers like that–do you know these designers out there? You do know them, right?”
Suppose you enter a dark room in an unknown building. You might panic about monsters that could be lurking in the dark. Or you could just turn on the light, to avoid bumping into furniture. The dark room is the future of artificial intelligence (AI). Unfortunately, many people believe that, as we step into the room, we might run into some evil, ultra-intelligent machines.
AI is going to augment natural human intelligence and enable people to gain the world’s collective expertise while requiring less time and study than has traditionally been required to become an expert in any one thing. Traditionally in humans, an expert’s mind offers fewer possibilities and slower growth, while a beginner’s mind offers many possibilities and rapid growth.