Hello World Notes


By Hannah Fry

  1. Under the guidance of husband-and-wife team Edwina Dunn and Clive Humby, and beginning in certain selected stores, Tesco released its brand-new Clubcard – a plastic card, the size and shape of a credit card, that customers could present at a checkout when paying for their shopping. The exchange was simple. For each transaction using a Clubcard, the customer would collect points that they could use against future purchases in store, while Tesco would take a record of the sale and associate it with the customer’s name. They discovered that a handful of loyal customers accounted for a massive amount of their sales. Armed with this knowledge, they could get to work nudging their customers’ buying behaviour by sending out a series of coupons to Clubcard users in the post. High spenders were given vouchers ranging from £3 to £30; low spenders were sent a smaller incentive of £1 to £10. And the results were staggering. Nearly 70 per cent of the coupons were redeemed, and while in the stores, customers filled up their baskets: people who had Clubcards spent 4 per cent more overall than those who didn’t. Clubcard was rolled out to all Tesco customers and is widely credited with putting the company ahead of its main rival Sainsbury’s to become the biggest supermarket in the UK. (Pg. 26 & 27)
  2. According to Clive Humby’s book on Tesco, this has now become an informal policy within the company. Whenever something comes up that is just a bit too revealing, they apologise and delete the data. (Pg. 28)
  3. A year after the tool was first introduced, a father of a teenage girl stormed into a Target store in Minneapolis demanding to see the manager. His daughter had been sent some pregnancy coupons in the post and he was outraged that the retailer seemed to be normalising teenage pregnancy. The manager of the store apologised profusely and called the man’s home a few days later to reiterate the company’s regret about the whole affair. But by then, according to a story in the New York Times, the father had an apology of his own to make. (Pg. 29)
  4. A Target executive explained, “We found out that as long as a pregnant woman thinks she hasn’t been spied on, she’ll use the coupons. She just assumes that everyone else on her block got the same mailer for diapers and cribs. As long as we don’t spook her, it works.” So Target still has a pregnancy predictor running behind the scenes – as most retailers do now. The only difference is that it will mix in the pregnancy-related coupons with other more generic items so that the customers don’t notice they’ve been targeted. (Pg. 30)
  5. When Heidi Waterhouse lost a much-wanted pregnancy, she unsubscribed from all the weekly emails updating her on her baby’s growth, telling her which fruit the fetus now matched in size. She unsubscribed from all the mailing lists and wish lists she had signed up to in eager anticipation of the birth. But, as she told an audience of developers at a conference in 2018, there was no power on earth that could unsubscribe her from the pregnancy adverts that followed her around the internet. The digital shadow of a pregnancy continued to circulate alone, without the mother or the baby. ‘Nobody who built that system thought of that consequence,’ she explains. (Pg. 34 & 35)
  6. A study from 2015 demonstrated that Google was serving far fewer ads for high-paying executive jobs to women surfing the web than to men. And, after one African American Harvard professor learned that Googling her own name returned adverts targeted at people with a criminal record (and as a result was forced to prove to a potential employer that she’d never been in trouble with the police), she began researching the adverts served to different ethnic groups. She discovered that searches for ‘black-sounding names’ were disproportionately more likely to be linked to adverts containing the word ‘arrest’ (e.g. “Have you been arrested?”) than searches for ‘white-sounding names’. (Pg. 35)
  7. All this work was motivated by how it could be used in advertising. Overall, the team claimed that matching adverts to a person’s character led to 40 per cent more clicks and up to 50 per cent more purchases than using generic, unpersonalised ads. For an advertiser, that’s pretty impressive. (Pg. 40 & 41)
  8. If the undercover footage recorded by Channel 4 News is to be believed, Cambridge Analytica were also using personality profiles of the electorate to deliver emotionally charged political messages – for example, finding single mothers who score highly on neuroticism and preying on their fear of being attacked in their own homes to persuade them into supporting a pro-gun-lobby message. Commercial advertisers have probably used these techniques extensively, and other political campaigns probably have, too. (Pg. 41 & 42)
  9. But on top of all that, Cambridge Analytica are accused of creating adverts and dressing them up as journalism. According to one whistleblower’s testimony to the Guardian, one of the most effective ads during the campaign was an interactive graphic titled ‘10 inconvenient truths about the Clinton Foundation’. Another whistleblower went further and claimed that the ‘articles’ planted by Cambridge Analytica were often based on demonstrable falsehoods. (Pg. 42)
  10. All of the above is true, but the actual effects are tiny. In the Facebook experiment, users were more likely to post positive messages if they were shielded from negative news, but the difference amounted to less than one-tenth of one percentage point. Likewise, in the targeted adverts example, the makeup sold to introverts was more successful if it took into account the person’s character, but the difference it made was minuscule.

And yet, potentially, in an election those tiny slivers of influence might be all you need to swing the balance. In a population of tens or hundreds of millions, those one-in-a-thousand switches can quickly add up. And when you remember that, as Jamie Bartlett pointed out in a piece for the Spectator, Trump won Pennsylvania by 44,000 votes out of six million cast, Wisconsin by 22,000, and Michigan by 11,000, then perhaps margins of less than 1 per cent are all you need. (Pg. 43 & 44)
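Those figures make for a quick back-of-the-envelope check. Here is a small sketch in Python using only the numbers quoted above (the vote totals for Wisconsin and Michigan aren’t given in the text, so only Pennsylvania’s margin is expressed as a percentage):

```python
# Figures quoted in the passage above.
pennsylvania_margin = 44_000      # winning margin, in votes
pennsylvania_votes = 6_000_000    # total votes cast

# The winning margin as a share of the vote: roughly 0.73 per cent.
margin_pct = pennsylvania_margin / pennsylvania_votes * 100

# A one-in-a-thousand persuasion rate across the same electorate.
switched = int(pennsylvania_votes * 0.001)  # 6,000 voters

# Each switched voter moves the margin by two (one side loses a vote,
# the other gains it), so 6,000 switches close a 12,000-vote gap --
# already more than Michigan's reported 11,000-vote margin.
margin_moved = 2 * switched

print(f"Pennsylvania margin: {margin_pct:.2f}% of votes cast")
print(f"Margin moved by a one-in-a-thousand switch rate: {margin_moved:,}")
```

Which is exactly the point being made: a persuasion effect far smaller than one percentage point, applied at scale, is already in the same range as the margins that decided those states.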

  1. Machine learning algorithm – Because the algorithm’s predictions are based on the patterns it learns from the data, a random forest is described as a machine-learning algorithm, which comes under the broader umbrella of artificial intelligence. (Pg. 58)
  2. However accurate the results of an algorithm might be, you could argue that using algorithms as a mirror to reflect the real world isn’t always helpful, especially when the mirror is reflecting a present reality that only exists because of centuries of bias. (Pg. 70)
  3. Just as with the algorithm, it’s not necessarily explicit prejudices that are causing these biased outcomes, so much as history repeating itself. (Pg. 71)
  4. When it comes to bail, for instance, you might hope that judges were able to look at the whole case together, carefully balancing all the pros and cons before coming to a decision. But unfortunately, the evidence suggests otherwise. Instead, psychologists have shown that judges are doing nothing more strategic than going through an ordered checklist of warning flags – past convictions, community ties, the prosecution’s request. If a flag is raised by the defendant’s story, the judge will stop and deny bail. (Pg. 73)
  5. Anchoring Effect – We find it difficult to put a numerical value on things, and are much more comfortable making comparisons between values than coming up with a single value out of the blue. (Pg. 73)
  6. Like those signs in supermarkets that say ‘Limit of 12 cans of soup per customer’. They aren’t designed to ward off soup fiends from buying up all the stock, as you might think. They exist to subtly manipulate your perception of how many cans of soup you need. One study back in the 1990s showed that precisely such a sign could increase the average sale per customer from 3.3 tins of soup to 7. (Pg. 73)
  7. By now, you won’t be surprised to learn that judges are also susceptible to the anchoring effect. They’re more likely to award higher damages if the prosecution demands a high amount, and hand down a longer sentence if the prosecutor requests a harsher punishment. 
  8. Humans, as a species, could stand to benefit from having our medical records opened up to algorithms. Watson doesn’t have to remain a far-off fantasy. But to turn it into reality, we’ll need to hand over our records to companies rich enough to drag us through the slog of the challenges that lie between us and the magical electronic door. And in giving up our privacy we’ll always be dancing with the danger that our records could be compromised, stolen or used against us. Are you prepared to take that risk? Do you believe in these algorithms and their benefits enough to sacrifice your privacy? (Pg. 106 & 107)
  9. Academics, pharmaceutical companies and nonprofits around the world are queuing up to partner with 23andMe to hunt for patterns in their data – both with and without the help of algorithms – in the hope of answering big questions that affect us all. (Pg. 108)
  10. The dataset is also valuable in a much more literal sense. Although the research being done offers an immense benefit to society, 23andMe isn’t doing this out of the goodness of its heart. If you give it consent (and 80 percent of consumers do), it will sell an anonymised version of your genetic data to those aforementioned research partners for a tidy profit. (Pg. 109)
  11. 23andMe board member – ‘The long game here is not to make money selling kits, although the kits are essential to get the base data.’ Something worth remembering whenever you send off for a commercial genetic report: you’re not using the product; you are the product. (Pg. 109)
  12. No one can make you take a DNA test if you don’t want to, but in the US, insurers can ask you if you’ve already taken a test that calculates your risk of developing a particular disease such as Parkinson’s, Alzheimer’s or breast cancer, and deny you life insurance if the answer isn’t to your liking. And in the UK, insurers are allowed to take the results of genetic tests for Huntington’s disease into account (if the cover is over £500,000). You could try lying, of course, and pretend you’ve never had the test, but doing so would invalidate your policy. The only way to avoid this kind of discrimination is never to have the test in the first place. (Pg. 109 & 110)
  13. Although criminologists had known about the flag and the boost for a few years by this point, in searching through the patterns in the data the UCLA group became the first to realise that the mathematical equations which so beautifully predicted the risk of seismic shocks and aftershocks could also be used to predict crimes and ‘aftercrimes’. And it didn’t just work for burglary. Here was a way to forecast everything from car theft to vandalism. (Pg. 152) How two seemingly random things are connected – all professionals are creatives.
  14. The problem was, one UK study calculated, that a police officer patrolling their randomly assigned beat on foot could expect to come within a hundred yards of a burglary just once in every eight years. (Pg. 155) So our efforts should not be completely divided; things always need to be strategic.
  15. As it turns out, we have a complicated relationship with novelty. On average, the higher the novelty score a film had, the better it did at the box office. But only up to a point. Push past that novelty threshold and there’s a precipice waiting; the revenue earned by a film fell off a cliff for anything that scored over 0.8. Sameet Sreenivasan’s study showed what social scientists had long suspected: we’re put off by the banal, but also hate the radically unfamiliar. The very best films sit in a narrow sweet spot between ‘new’ and ‘not too new’. Connect to the newness we were trying to bring in the Pentawards project. We could have done better if we’d stuck to one new thing and experimented most with it.  (Pg. 182)
  16. Wikipedia edits were used to find connections with the popularity of a film. (Pg. 182)
  17. If you can’t use popularity to tell you what’s ‘good’, then how can you measure quality? (Pg. 184) Based on attempts to predict a film’s success by measuring popularity.
  18. This is important: if we want algorithms to have any kind of autonomy within the arts – either to create new works, or to give us meaningful insights into the art we create ourselves – we’re going to need some kind of measure of quality to go on. There has to be an objective way to point the algorithm in the right direction, a ‘ground truth’ that it can refer back to. Like an art analogy of ‘this cluster of cells is cancerous’ or ‘the defendant went on to commit a crime’. Without it, making progress is tricky. We can’t design an algorithm to compose or find a ‘good’ song if we can’t define what we mean by ‘good’.

Unfortunately, in trying to find an objective measure of quality, we come up against a deeply contentious philosophical question that dates back as far as Plato. One that has been the subject of debate for more than two millennia. How do you judge the aesthetic value of art?

Some philosophers – like Gottfried Leibniz – argued that if there are objects that we can all agree on as beautiful, say Michelangelo’s David or Mozart’s Lacrimosa, then there should be some definable, measurable essence of beauty that makes one piece of art objectively better than another.

But on the other hand, it’s rather rare for everyone to agree. Other philosophers, such as David Hume, argue that beauty lies in the eye of the beholder. Consider the work of Andy Warhol, for instance, which offers a powerful aesthetic experience to some, while others find it artistically indistinguishable from a tin of soup. 

Others still, Immanuel Kant among them, have said the truth is something in between. That our judgements of beauty are not wholly subjective, nor can they be entirely objective. They are sensory, emotional, intellectual all at once – and crucially, can change over time depending on the state of mind of the observer. 

There is certainly some evidence to support this idea. Fans of Banksy might remember how he set up a stall in Central Park, New York, in 2013, anonymously selling original black-and-white spray-painted canvases for $60 each. The stall was tucked away in a row of others selling the usual touristy stuff, so the price tag must have seemed expensive to those passing by. It was several hours before someone decided to buy one. In total, the day’s takings were $420. But a year later, in an auction house in London, another buyer would deem the aesthetic value of the very same artwork great enough to tempt them to spend £68,000 on a single canvas. (Pg. 184 & 185) How all questions lead to philosophy in a book that is about algorithms; popularity, art and value; could branding perhaps influence this value war between art and its perception?

  1. My favourite example comes from an experiment conducted by the Washington Post in 2007. The paper asked the internationally renowned violinist Joshua Bell to add an extra concert to his schedule of sold-out symphony halls. Armed with his $3.5 million Stradivarius violin, Bell pitched up at the top of an escalator in a metro station in Washington DC during morning rush hour, put a hat on the ground to collect donations and performed for 43 minutes. As the Washington Post put it, here was one of the ‘finest classical musicians in the world, playing some of the most elegant music ever written on one of the most valuable violins ever made’. The result? Seven people stopped to listen for a while. Over a thousand more walked straight past. By the end of his performance, Bell had collected a measly $32.17 in his hat. (Pg. 185 & 186) It shows the disparity between value and perception; could branding have changed that?
  2. Can an algorithm be creative if its only sense of art is what happened in the past? (Pg. 188)
  3. The algorithms are undoubtedly great imitators, just not very good innovators. (Pg. 192)
  4. But there is certainly an argument that much of human creativity – like the products of the ‘composing’ algorithms – is just a novel combination of pre-existing ideas. (Pg. 193)
  5. Cope, meanwhile, has a very simple definition for creativity, which easily encapsulates what the algorithms can do: ‘Creativity is just finding an association between two things which ordinarily would not seem related.’ (Pg. 193)
  6. Perhaps. But I can’t help feeling that if EMI and algorithms like it are exhibiting creativity, then it’s a rather feeble form. Their music might be beautiful, but it’s not profound. And try as I might, I can’t quite shake the feeling that seeing the output of these machines as art leaves us with a rather culturally impoverished view of the world. It’s cultural comfort food, maybe. But not art with a capital A. (Pg. 193)
  7. Among all the staggeringly impressive, mind-boggling things that data and statistics can tell me, how it feels to be human isn’t one of them. (Pg. 195)
  8. Imagine that, rather than exclusively focusing our attention on designing our algorithms to adhere to some impossible standard of perfect fairness, we instead designed them to facilitate redress when they inevitably erred; that we put as much time and effort into ensuring that automatic systems were as easy to challenge as they are to implement. Perhaps the answer is to build algorithms to be contestable from the ground up. Imagine that we designed them to support humans in their decisions, rather than instruct them. To be transparent about why they came to a particular decision, rather than just inform us of the result. (Pg. 201)
  9. This is the future I’m hoping for. One where the arrogant, dictatorial algorithms that fill many of these pages are a thing of the past. One where we stop seeing machines as objective masters and start treating them as we would any other source of power. By questioning their decisions; scrutinising their motives; acknowledging our emotions; demanding to know who stands to benefit; holding them accountable for their mistakes; and refusing to become complacent. I think this is the key to a future where the net overall effect of algorithms is a positive force for society. And it’s only right that it’s a job that rests squarely on our shoulders. Because one thing is for sure. In the age of the algorithm, humans have never been more important. (Pg. 202 & 203)

Bringing together alcohol and non-alcohol drinkers by inverting the shape of the hexagonal Roku bottle and giving equal visibility to both versions on pub shelves. 

