
Teaching robotics, how economists can learn from Machine Learning & other reads

What Economists Can Learn from Machine Learning

Susan Athey of Stanford University and the NBER discusses the benefits to economics of using machine learning methodology: “Machine Learning inspires us to be both systematic and pragmatic.” (The National Bureau of Economic Research)

The gains from technology must be channelled to a broader base of the population

In the years ahead, technological improvements in robotics and automation will boost productivity and efficiency, implying significant economic gains for companies. But, unless the proper policies to nurture job growth are put in place, it remains uncertain whether demand for labour will continue to grow as technology marches forward. (Nouriel Roubini on Economia)

Robot farmer

Agbotic is building an automated mini-farm run by a robot. The 15,000-square-foot greenhouse’s robot isn’t modeled after a human in any way.

Startup Agbotic has already designed automated lawn tractors and other projects. The company says the $350,000 robot greenhouses can let farmers easily grow organic vegetables. (Watertown Daily Times)

How to teach … robotics

Ahead of the new school term, The Guardian provides a snazzy ‘lesson’ on teaching robotics aimed at inspiring future engineers and computer scientists. (The Guardian)

Economic historian Nathan Rosenberg passed away

I was sad to learn about the passing of my former Professor, Nathan Rosenberg. Professor Rosenberg was one of the wisest economic historians on technological change and the impact of innovation. In addition to his widely read books and papers on economic history he also pioneered the research on uncertainty in innovation and technological change. His work focused on how technological innovation has shaped and been shaped by science, industry, and economics in the twentieth century. (RIP Professor Rosenberg)

Like cholesterol, robots come in two varieties: the good and the bad

Robotics may be getting a lot of headlines today – but how do the stories compare to those of the past? Here’s a list of five fascinating reads in robotics from 2008 – after all, 2008 was the year of WALL-E and the release of Boston Dynamics’ first BigDog video.

English village to be invaded in robot competition

The UK Ministry of Defence’s (MoD) Grand Challenge is designed to boost development of teams of small robots able to scout out hidden dangers in hostile urban areas.

Over 10 days in August, 11 teams of robots will compete to locate and identify four different threats hidden around a mock East German village used for urban warfare training, at Copehill Down, Wiltshire.

The robots must find snipers, armed vehicles, armed foot soldiers, and improvised explosive devices hidden around the village, and relay a real-time picture of what is happening back to a command post. (New Scientist April 2008)

Check out those drones! As an aside, I wonder if that mock East German village built for urban warfare training during the Cold War is still ‘active.’

Social tension between humans and robots

Mighty Atom was the size of a ten-year-old boy, more or less, but had a 100,000 horsepower atomic energy heart, an electronic brain, search light eyes, super-sensitive hearing, rockets in his legs, ray guns in his fingers, and a pair of machine guns in his posterior. He attended primary school, where he was often teased for being a robot… Since robots could not harm humans – that’s their nature – he had no choice but to put up with it. (The Valve November 2008)

Monkey’s Thoughts Propel Robot, a Step That May Help Humans

It was the first time that brain signals had been used to make a robot walk… “The robot, called CB for Computational Brain, has the same range of motion as a human. It can dance, squat, point and “feel” the ground with sensors embedded in its feet, and it will not fall over when shoved.” (The New York Times January 2008)

Your future with robots – Rise of the Robots

“By 2010 we will see mobile robots as big as people but with cognitive abilities similar in many respects to those of a lizard. The machines will be capable of carrying out simple chores, such as vacuuming, dusting, delivering packages and taking out the garbage. By 2040, I believe, we will finally achieve the original goal of robotics and a thematic mainstay of science fiction: a freely moving machine with the intellectual capabilities of a human being.” (Scientific American January 2008)

US war robots in Iraq ‘turned guns’ on human comrades

Rogue robots on the loose — Ground-crawling US war robots armed with machine guns, deployed to fight in Iraq (in 2007), reportedly turned on their fleshy masters. The rebellious machine warriors have been retired from combat pending upgrades. (The Register April 2008)

Finally, if you have not seen Alex Rivera’s 2008 social and political sci-fi movie, Sleep Dealer, I recommend it. See the trailer here (or below) – “We build your skyscrapers and harvest your crops – let our robotics do your dirty work.”

Picture – The San Antonio Light, 16 October 1928, the headline reads: “Steel Soldiers May Do Mankind’s Fighting.”

Five mid-week reads in behavioural science, machine learning and robotics

Five mid-week reads in behavioural science, machine learning and robotics to stay up to date on the robot economy.

  1. Humans define the goals, technology implements the goals – A wide-ranging interview with Stephen Wolfram on Artificial Intelligence and the future. “I think the issue is, as you look to the future, and you say, “Well, what will the future humans …?” where there’s been much more automation that’s been achieved than in today’s world—and we’ve already got plenty of automation, but vastly more will be achieved. And many professions which right now require endless human effort, those will be basically completely automated, and at some point, whatever humans choose to do, the machines will successfully do for them. And then the question is, so then what happens? What do people intrinsically want to do? What will be the evolution of human purposes, human goals?” (GigaOm)
  2. Chinese factory replaces 90% of humans with robots, production soars – There are still people working at the factory, though. Three workers check and monitor each production line, and other employees monitor a computer control system. Previously, there were 650 employees at the factory; with the new robots, there are now only 60. (TechRepublic)
  3. Sex with robots will be ‘the norm’ in 50 years – An expert on the psychology of sex has claimed that she expects having sex with robots to be socially acceptable by 2070 (The Independent)
  4. Cheaper Robots, Fewer Workers – A NY Times Bits video series, called Robotica, examining how robots are poised to change the way we do business and conduct our daily lives. (The New York Times)
  5. 10 lessons in Reinforcement Learning from Google’s DeepMind – A very good series of videos on Reinforcement Learning, by David Silver from Google’s DeepMind (for a taste of the subject, see the minimal Q-learning sketch after this list).
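
For a flavor of what such lectures cover, here is a minimal tabular Q-learning sketch on a toy five-state chain. The environment and parameters are invented for illustration and are not taken from the lectures:

```python
import random

# Minimal tabular Q-learning on a toy chain world (illustrative only).
# States 0..4 sit in a row; the only reward is for reaching state 4.
# Actions: 0 = move left, 1 = move right.
N_STATES, GOAL = 5, 4
alpha, gamma, epsilon = 0.1, 0.9, 0.1   # learning rate, discount, exploration

Q = [[0.0, 0.0] for _ in range(N_STATES)]

def greedy(qvals):
    """Arg-max with random tie-breaking."""
    best = max(qvals)
    return random.choice([a for a, v in enumerate(qvals) if v == best])

def step(state, action):
    """Move left/right, clipped to the chain; reward 1.0 on reaching the goal."""
    nxt = max(0, min(N_STATES - 1, state + (1 if action == 1 else -1)))
    return nxt, (1.0 if nxt == GOAL else 0.0)

for _ in range(500):                     # training episodes
    s = 0
    while s != GOAL:
        # Epsilon-greedy action selection
        a = random.randrange(2) if random.random() < epsilon else greedy(Q[s])
        s2, r = step(s, a)
        # Core update: nudge Q(s,a) toward r + gamma * max_a' Q(s',a')
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

print([greedy(q) for q in Q[:-1]])       # learned policy: [1, 1, 1, 1] (always go right)
```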

What are you reading?

International Robotics Challenge Announced


With much wisdom and foresight, Edward Teller, in his 1975 “Energy: A Plan for Action,” envisioned a world where highly “skilled scientist-technicians” are surrounded by an army of “craftsmen” who monitor, develop, and control the automated production processes with computer networks, writing:

No matter what popular opinion asks us to believe, technology will be crucial for human survival. Contrary to much of our current thinking, technology and its development is not antithetical to human values. Indeed, quite the opposite is true. Tool making and the social organization it implies are very deeply ingrained in our natures. This is, in fact, the primary attribute that distinguishes man from other animals.

We must continue to adapt our technology, which is, in essence, our ability to shape nature more effectively in order to face the problems that this human race faces today. It is for this reason that the development and expansion of technical education is so important. It is only through the possession of high skills and the development of educational systems for the acquisition of these skills that human prosperity can be insured.

In what seems a recognition of the fundamentals of Teller’s statement, Khalifa University in Abu Dhabi, the capital of the United Arab Emirates, announced a new robotics challenge as part of the UAE’s year of innovation.

The international robotics challenge, under the patronage of His Highness General Sheikh Mohamed Bin Zayed Al-Nahyan, Crown Prince of Abu Dhabi, will be held every two years and offers prizes worth a total of USD 5 million, with the first challenge being held in November 2016.

Each group of finalists will receive US$500,000, with the winning team receiving US$2 million.

In the outline of the challenge the organizers understand that ‘Robotics technology is poised to fuel a broad array of next-generation products and applications across a diverse range of fields.’

The aim of the international robotics challenge according to Dr. Mohammed Al Mualla, Senior Vice President of Research and Development at Khalifa University is:

to shape the future of worldwide robotic technology and its uses by offering a challenge that requires conducting research, inventing new solutions and applying them to a real life scenario.

The first challenge in November next year will focus on land and aerial robots that can assess situations and work together in emergencies, which the website of the challenge describes as taking:

place at an arena that simulates the scene of an accident and involves a large moving vehicle being on fire. The competing teams will design a set of Unmanned Aerial Vehicles (UAVs) and Unmanned Ground Vehicles (UGVs), which will work autonomously without any human intervention to handle this incident. The challenge involves performing a set of complex tasks, such as landing UAVs on the moving vehicle’s rooftop and activating the emergency brake system that the vehicle is equipped with. The team then coordinates with the UGVs to move towards the burning vehicle and activate the fire extinguishing system while avoiding the many obstacles that will be deployed in the field. Air and ground robots will then cooperate to locate the victims and conduct operations to transport them out of the scene.

The challenge also aims to boost the robotics ecosystem in the UAE and help local industry, as well as play a major part in attracting robotics talent and students to UAE universities. Dr Arif Al Hammadi, Executive Vice President at Khalifa University, said: “We want every university in the UAE to eventually have a robotics lab for its students.”

A call for proposals will begin in May, with submissions due in September 2015. Finalists will be chosen in November this year and the challenge will take place in the UAE in November next year.

The organizers indicate that the overall objective of this challenge is to advance the state of the robotics industry and to build better-designed robots. Because the challenge is performance-based, teams from around the world will demonstrate their abilities to produce advanced robots in a highly competitive, team-based environment.

Additional information about the Mohamed Bin Zayed International Robotics Challenge (MBZIRC) is available on the challenge website.

Photo credit: the MBZIRC website.

Nick Bostrom’s Superintelligence and the Metaphorical A.I. Time Bomb

Frank Knight was an idiosyncratic economist who formalized a distinction between risk and uncertainty in his 1921 book, Risk, Uncertainty, and Profit. As Knight saw it, an ever-changing world brings new opportunities, but also means we have imperfect knowledge of future events. According to Knight, risk applies to situations where we do not know the outcome of a given situation, but can accurately measure the odds. Uncertainty, on the other hand, applies to situations where we cannot know all the information we need in order to set accurate odds in the first place.

“There is a fundamental distinction between the reward for taking a known risk and that for assuming a risk whose value itself is not known,” Knight wrote. A known risk is “easily converted into an effective certainty,” while “true uncertainty,” as Knight called it, is “not susceptible to measurement.”

Sometimes, due to uncertainty, we react too little or too late, but sometimes we overreact. This was perhaps the case with the Millennium Bug (the ‘millennium time bomb’) or the 2009 swine flu, the pandemic that never was. Are we perhaps so afraid of epidemics, a legacy of a not so distant past, that we sometimes overreact? Metaphorical ‘time bombs’ don’t explode: such predictions rest on false ceteris paribus assumptions.

Artificial Intelligence may be one of the areas where we overreact. A new book by Oxford Martin’s Nick Bostrom, Superintelligence: Paths, Dangers, Strategies, on artificial intelligence as an existential risk, has been in the headlines since Elon Musk, the high-profile CEO of electric car maker Tesla Motors and CEO and co-founder of SpaceX, said in an interview at an MIT symposium that AI is nothing short of a threat to humanity: “With artificial intelligence, we are summoning the demon.” This was on top of an earlier tweet in which Musk said he had been reading Superintelligence and that A.I. is “possibly a bigger threat than nukes.” Note: Elon Musk was one of the people Nick Bostrom thanks in the introduction to his book as a ‘contributor through discussion.’

Perhaps Elon was thinking of Blake’s The Book of Urizen when he described A.I. as ‘summoning the demon’:

Lo, a shadow of horror is risen In Eternity! Unknown, unprolific! Self-clos’d, all-repelling: what Demon Hath form’d this abominable void, This soul-shudd’ring vacuum? – Some said: “It is Artificial Intelligence (Urizen),” But unknown, abstracted: Brooding secret, the dark power hid.

Professor Stephen Hawking and Stuart Russell (Russell is the co-author, along with Peter Norvig, of the seminal textbook on A.I.) have also expressed their reservations about the risks of A.I., indicating its invention “might” be our last “unless we learn how to avoid the risks.”

Hawking and his co-authors were also keen to point out the “incalculable benefits” of A.I.:

The potential benefits are huge; everything that civilisation has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools that A.I. may provide, but the eradication of war, disease, and poverty would be high on anyone’s list. Success in creating AI would be the biggest event in human history.

In 1951, Alan Turing spoke of machines outstripping humans intellectually:

“Once the machine thinking method has started, it would not take long to outstrip our feeble powers. … At some stage therefore we should have to expect the machines to take control, in the way that is mentioned in Samuel Butler’s Erewhon.”

Leading A.I. researcher Yann LeCun, commenting on Elon Musk’s recent claim that “AI could be our biggest existential threat,” wrote:

Regarding Elon’s comment: AI is a potentially powerful technology. Every powerful technology can be used for good things (like curing disease, improving road safety, discovering new drugs and treatments, connecting people….) and for bad things (like killing people or spying on them). Like any powerful technology, it must be handled with care. There should be laws and treaties to prevent its misuse. But the dangers of AI robots taking over the world and killing us all is both extremely unlikely and very far in the future.

So what is superintelligence?

Stuart Russell and Peter Norvig, in their much-cited book Artificial Intelligence: A Modern Approach, consider A.I. to address thought processes and reasoning as well as behavior; they then subdivide their definition of A.I. into four categories: ‘thinking humanly,’ ‘acting humanly,’ ‘thinking rationally’ and ‘acting rationally.’

In Superintelligence Nick Bostrom says it is:

“Any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest.”

Bostrom has taken this further and has previously defined superintelligence as follows:

“By a ‘superintelligence’ we mean an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills.”

He also indicates that a “human-level artificial intelligence would probably have learning, uncertainty, and concept formation as central features.”

What will this Superintelligence do according to Bostrom?

For a good review of Superintelligence see Ethical Guidelines for A Superintelligence by Ernest Davis (PDF), who writes of Bostrom’s thesis:

“The AI will attain a level of intelligence immensely greater than human. There is then a serious danger that the AI will achieve total dominance of earthly society, and bring about nightmarish, apocalyptic changes in human life. Bostrom describes various horrible scenarios and the paths that would lead to them in grisly detail. He expects that the AI might well then turn to large scale interstellar travel and colonize the galaxy and beyond. He argues, therefore, that ensuring that this does not happen must be a top priority for mankind.”

The Bill Joy Effect

Bill Joy wrote a widely quoted article in Wired magazine in April 2000, with the fear-filled title Why the future doesn’t need us, where he warned:

“If the machines are permitted to make all their own decisions, we can’t make any conjectures as to the results, because it is impossible to guess how such machines might behave. We only point out that the fate of the human race would be at the mercy of the machines.”

Eminent researchers John Seely Brown and Paul Duguid offered a strong counterargument to Joy’s pessimistic piece in their paper, A Response to Bill Joy and the Doom-and-Gloom Technofuturists, where they compared the concerns over A.I. and other technologies to the nuclear weapons crisis and the strong societal controls that were put in place to ‘control’ the risks of nuclear weapons. One of their arguments was that society at large has such a significant vested interest in existential risks that it works to mitigate them.

Seely Brown and Duguid observed that too often people have “technological tunnel vision; they have trouble bringing other forces into view.” This may be a case in point with Bostrom’s Superintelligence: people who have worked closely with him have indicated that there are ‘probably’ only five “computer scientists in the world currently working on how to programme the super-smart machines of the not-too-distant future to make sure A.I. remains friendly.” In his book presentation at Authors@Google, Bostrom claimed that only half a dozen scientists worldwide are working full time on the control problem (see the last six minutes). That sounds like “technological tunnel vision,” and like someone who has “trouble bringing other forces into view.”

Tunnel vision A.I. Bias

Nassim Nicholas Taleb warns us to beware of confirmation bias. We focus on the seen and the easy to imagine and use them to confirm our theories while ignoring the unseen. If we had a big blacked-out bowl with 999 red balls and one black one, for example, our knowledge about the presence of red balls grows each time we take out a red ball. But our knowledge of the absence of black balls grows more slowly.

This is Taleb’s key insight in his book Fooled by Randomness, and it has profound implications. A theory which states that all balls are red will likely be ‘corroborated’ with each observation, and our confidence that all balls are red will increase. Yet the probability that the next ball will be black rises all the time, since each red draw leaves the lone black ball in a smaller remaining pool. If something hasn’t happened before, or hasn’t happened for some time, we assume that it can’t happen (hence the ‘this time it’s different’ syndrome). But we know that it can happen. Worse, we know that eventually it will.
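
A minimal sketch of the arithmetic behind this, using Taleb’s own numbers (999 red balls and one black, drawn without replacement):

```python
# Taleb's urn: 999 red balls and 1 black, drawn without replacement.
# Each red draw 'corroborates' the all-red theory, yet the probability
# that the NEXT ball is black keeps rising as the pool shrinks.
RED, BLACK = 999, 1

for reds_drawn in (0, 500, 900, 990, 998):
    remaining = RED + BLACK - reds_drawn
    p_next_black = BLACK / remaining
    print(f"after {reds_drawn:3d} red draws: P(next ball is black) = {p_next_black:.4f}")
```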

In every tool we create, an idea is embedded that goes beyond the function of the thing itself. Just like the human brain, every technology has an inherent bias. It has within its physical form a predisposition toward being used in certain ways and not others.

It may be this bias that caused Professor Sendhil Mullainathan, whilst commenting on the Myth of A.I., to say he is:

“More afraid of machine stupidity than of machine intelligence.”

Bostrom is highly familiar with human bias having written Anthropic Bias, a book that since its first publication in 2002 has achieved the status of a classic.

A.I. Black Swan

In 2002, Nick Bostrom wrote of A.I. and SuperIntelligence Existential Risks:

When we create the first superintelligent entity, we might make a mistake and give it goals that lead it to annihilate humankind, assuming its enormous intellectual advantage gives it the power to do so. For example, we could mistakenly elevate a subgoal to the status of a supergoal. We tell it to solve a mathematical problem, and it complies by turning all the matter in the solar system into a giant calculating device, in the process killing the person who asked the question. [“Existential Risks”, 2002]

With my behavioral economics hat on, I know that the evidence that we can’t forecast is overwhelming; however, we must also always plan for and do our best to mitigate risks, or ‘black swan’ events, as best we can… and it appears that the artificial intelligence community is doing a pretty good job of that.

MIT has an entire division, the Engineering Systems Division, that brings together researchers from engineering, the hard sciences, and the social sciences to identify and solve problems of complex engineered systems. One promising technique for engineering complex systems is known as axiomatic design, an approach conceived by Nam Suh, the former head of MIT’s Department of Mechanical Engineering. The idea of axiomatic design is to minimize the information content of the engineered system while maintaining its ability to carry out its functional requirements. Properly applied, axiomatic design results in airplanes, software, and toasters all just complex enough, and no more, to attain their design goals. Axiomatic design minimizes the effective complexity of the engineered system while maintaining the system’s effectiveness.

Professor Joanna Bryson has a trove of good information and research papers showing some of the efforts researchers are taking when it comes to mitigating A.I. risks.

The UK Government’s Chief Science Officer is addressing A.I. risk and what will be needed to govern such risk. The Association for the Advancement of Artificial Intelligence (AAAI) has a panel of leading A.I. researchers addressing the impact and influences of A.I on society. There are many others.


Of course I am aware that one counterintuitive result of a computer’s or A.I.’s fundamentally logical operation is that its future behavior is intrinsically unpredictable. However, I have a hard time believing an A.I. will want to destroy humanity, and as much as I take the long-term risk of A.I. seriously, I doubt superintelligent A.I. will arrive in 5 or 10 years. We’re still not a paperless society. I can’t see a programmer, or mad scientist for that matter, inventing a superintelligent A.I. and programming it with: “Your mission, should you choose to accept it, is to eliminate all humans, wherever they may rear their head.”

I have gleaned many good insights from reading Superintelligence and recommend Bostrom’s book. I do not think human ingenuity will merely allow us to become lumbering robots, survival machines entirely controlled by these super-machines. There is still something about being wiped out by a superintelligent A.I. that’s like squaring the circle: it doesn’t quite add up.

Investments in robots and drones on the up – creating new jobs


In 1850 the French economist Frédéric Bastiat published an essay titled That Which Is Seen, and That Which Is Not Seen. The essay is most famous for introducing the concept of ‘opportunity cost’: limits, alternatives and choices. To obtain more of one thing, we give up the opportunity of getting the next best thing; because we “can’t have it all,” we must decide what we will have and what we must forgo. That sacrifice is the opportunity cost of the choice.

Many argue that opportunity cost applies in business: once the cost of marginal labor rises too high, it makes more sense to replace minimum wage jobs with robots or other automated technology, leading to increased production and profits.

Of course this is not a new phenomenon. In his 1850 essay, Bastiat wrote:

“A curse on machines! Every year, their increasing power devotes millions of workmen to pauperism, by depriving them of work, and therefore of wages and bread. A curse on machines!

This is the cry which is raised by vulgar prejudice, and echoed in the journals… machinery must injure labour. This is not the case.”

It is a cry echoed in media today, just as it was 164 years ago – 164 years during which humans have seen the greatest advancement of technological progress, resulting in more luxury goods, improved health, longer life expectancy, better housing and sanitation, clean water, electricity, instant communication around the globe via the Internet for free, mobile phones, planes and automobiles, heart transplants, and so on. Ninety-nine percent of the poorest people in the ‘developed world’ have amenities that the wealthiest people of Bastiat’s time could not have imagined.

Machinery does reduce some labor, but as Bastiat points out new labor from new industries is quickly created. The very industry, robotics, that is said to be eliminating jobs is in fact creating hundreds of thousands of jobs.

According to the European Union Commission, by 2020 service robotics could reach a market volume of more than 60 billion euros per year; the Commission forecasts 240,000 new jobs in the EU alone, backed by an investment of €2.8 billion during this period.

The International Federation of Robotics has reported that robotics will be a major driver for global job creation, creating more than one million jobs by 2016.

Many of these new jobs will come from investments in the robotics sector, which is currently experiencing a major boost.

Robotics startups like Jibo have blasted through their crowdfunding campaigns: Jibo raised $1,270,193 in a matter of days against a goal of $100,000. Much of the investment will allow Jibo to recruit new staff as the company delivers its artificially intelligent robot helper.

Another robotics startup, the drone manufacturer Airware, raised an additional $25 million Series B round on top of the $12.2 million it raised in its Series A round. The company said it had raised the new funding “to build out its staff.”

The South Korean government is mooted to invest $2.5 billion by the end of 2018 in joint projects with robotics companies, creating more jobs and targeting more than $6 billion in annual sales.

Investors are flocking to stock listed robotic companies in the US and also in China, whose manufacturing sector has a healthy appetite for all things robotic.

Japan is building a huge drone fleet. The country will invest ¥3 billion (approximately $372 million) in the coming decade to drastically expand its military unmanned aerial vehicle (UAV) program.

An estimated $6.4 billion is currently being spent each year on developing drone technology around the world, according to a report published earlier this month by the Teal Group Corp.

Whilst some jobs will disappear, there are hundreds of companies and governments investing tens of billions of dollars in drones and robotics, and in doing so creating a significant number of new jobs.

The current generation of engineers and roboticists are making science fiction stories of magical realism come true and creating millions of jobs in the process. As Bastiat put it: “to curse machines is to curse the human mind.”

Update – see also GE Reports on the Cyborg Workplace.

Tech companies’ competitive advantage – Bayes’ Rule and Behavioral Economics

The ‘system’ behind the Google robotic cars, which have driven themselves for hundreds of thousands of miles on the streets of several US states without being involved in an accident or violating any traffic law, is built upon the 18th-century math theorem known as Bayes’ rule. The cars analyze enormous quantities of data fed to a central onboard computer from radar sensors, cameras and laser range-finders, and use it to take the most optimal, efficient and cost-effective route.
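
As a loose illustration of the principle (not Google’s actual system, which is far more sophisticated), here is a toy one-dimensional ‘histogram filter’ of the kind used in robot localization: the robot’s belief about where it is gets re-weighted by Bayes’ rule each time a sensor reading arrives.

```python
# Toy one-dimensional Bayesian localization ('histogram filter').
# Illustrative only. The world is five cells; 'door' marks the cells
# where the robot's door sensor should fire.
world = ['door', 'wall', 'door', 'wall', 'wall']
belief = [1.0 / len(world)] * len(world)   # uniform prior: no idea where we are

def sense(belief, measurement, p_hit=0.9, p_miss=0.1):
    """Bayes update: re-weight each cell by the likelihood of the measurement."""
    posterior = [b * (p_hit if cell == measurement else p_miss)
                 for b, cell in zip(belief, world)]
    total = sum(posterior)
    return [p / total for p in posterior]  # normalize so beliefs sum to 1

belief = sense(belief, 'door')            # the robot's sensor reports a door
print([round(b, 3) for b in belief])      # belief concentrates on the two door cells
```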

In 1996 Microsoft’s Bill Gates described the company’s competitive advantage as its ‘expertise in Bayesian networks,’ and in 1998 Microsoft patented a spam filter that relied on Bayes’ theorem. Other tech companies quickly followed suit and adapted their systems and programming to include Bayes’ theorem.

During World War II, Alan Turing used Bayes’ theorem to help crack the Enigma code, potentially saving millions of lives, and is credited with helping the Allied forces to victory.

Artificial Intelligence was given a new lease of life when, in the early 1980s, Professor Judea Pearl of UCLA’s Computer Science Department and Cognitive Systems Lab introduced Bayesian networks as a representational device. Pearl’s work showed that Bayesian networks constitute one of the most influential advances in Artificial Intelligence, with applications in a wide range of domains.

Bayes’ theorem is based on the work of Thomas Bayes as a solution to a problem of inverse probability. It was presented in “An Essay towards solving a Problem in the Doctrine of Chances,” read to the Royal Society in 1763 after Bayes’ death (he died in 1761). Put simply, Bayes’ rule is a mathematical relationship between probabilities that allows them to be updated in light of new information.
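
In symbols, P(H|E) = P(E|H) × P(H) / P(E): the updated (posterior) belief in a hypothesis H after seeing evidence E. A minimal worked example in the spirit of the spam filters mentioned above; all the probabilities below are invented for illustration:

```python
# Bayes' rule: P(H | E) = P(E | H) * P(H) / P(E)
# Toy spam-filter update: H = "message is spam",
#                         E = "message contains the word 'winner'".
# All numbers are invented for illustration.
p_spam = 0.4                # prior: 40% of incoming mail is spam
p_word_given_spam = 0.25    # 'winner' appears in 25% of spam
p_word_given_ham = 0.01     # 'winner' appears in 1% of legitimate mail

# Total probability of seeing the word at all (law of total probability)
p_word = p_word_given_spam * p_spam + p_word_given_ham * (1 - p_spam)

# Posterior: probability the message is spam, given that it contains 'winner'
p_spam_given_word = p_word_given_spam * p_spam / p_word
print(f"P(spam | 'winner') = {p_spam_given_word:.3f}")  # ~0.943
```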

Before the advent of increased computing power, Bayes’ theorem was overlooked by most statisticians and scientists and in most industries. Today, thanks to Professor Pearl, Bayes’ theorem is used in robotics, artificial intelligence, machine learning, reinforcement learning and big data mining. IBM’s Watson, perhaps the most well known A.I. system, in all its intricacies ultimately relies on the deceptively simple concept of Bayes’ rule in negotiating the semantic complexities of natural language.

Bayes’ theorem is frequently behind the technology development of many of the multi-billion dollar acquisitions we read about, and it is certainly a core piece of technology behind the billions in profits at leading tech companies, from Google’s search to LinkedIn and Netflix’s and Amazon’s recommendation engines. It will play an even more important role in future developments in automation, robotics and big data.

Professor Pearl, through his work in the Cognitive Systems Lab, recognized the problems of human psychology in software development and representation. In 1984 he published a book simply called Heuristics (Intelligent Search Strategies for Computer Problem Solving).

Pearl’s book drew on research by the founders of behavioral economics, Daniel Kahneman and Amos Tversky, particularly their work with Paul Slovic, Judgment under Uncertainty: Heuristics and Biases (Cambridge University Press, 1982), in which they confirmed their own reliance on Bayes’ theorem:

Ch. 25, Conservatism in human information processing: “Probabilities quantify uncertainty. A probability, according to Bayesians like ourselves, is simply a number between zero and one that represents the extent to which a somewhat idealized person believes a statement to be true…. Since such probabilities describe the person who holds the opinion more than the event the opinion is about, they are called personal probabilities.” (Page 359)

Kahneman (Nobel Prize in Economics) and Tversky showed Bayesian methods more closely reflect how humans perceive their environment, respond to new information, and make decisions.  The theorem is a landmark of logical reasoning and the first serious triumph of statistical inference; Bayesian methods interpret probability as the degree of plausibility of a statement.

Kahneman and Tversky especially highlighted the heuristics and biases where Bayes’ rule can overcome our irrational decision-making, and this is why so many of the tech companies are seeking to train their engineers and programming staff in behavioral economics. We use the availability heuristic to assess probabilities rather than Bayesian equations. We all know that this gives way to all sorts of judgmental errors: a belief in the law of small numbers and a tendency towards hindsight bias. We know that we anchor on irrelevant information and that we take too much comfort in ever-more information that seems to confirm our beliefs.

The representativeness heuristic

Heuristics are described as “judgmental shortcuts that generally get us where we need to go – and quickly – but at the cost of occasionally sending us off course.”

When people rely on representativeness to make judgments, they are likely to judge wrongly, because the fact that something is more representative does not make it more likely. This heuristic is used because it is an easy computation (think Zipf’s law and human behavior – the principle of least effort). The problem is that people overestimate their ability to accurately predict the likelihood of an event. It can thus result in neglect of relevant base rates (the base rate fallacy) and other cognitive biases, especially confirmation bias.

The base rate fallacy describes how people fail to take the base rate of an event into account when solving probability problems; it is a frequent error in thinking.
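
A classic worked example (the numbers are hypothetical): suppose a disease affects 1 in 1,000 people and a test for it is 99% accurate. Intuition says a positive result is near-certain proof of disease; Bayes’ rule, taking the base rate into account, says otherwise:

```python
# Base rate fallacy, worked with Bayes' rule. All numbers are hypothetical.
p_disease = 0.001          # base rate: 1 in 1,000 people have the disease
p_pos_given_disease = 0.99 # test sensitivity (true positive rate)
p_pos_given_healthy = 0.01 # false positive rate

# Total probability of a positive test result
p_pos = (p_pos_given_disease * p_disease
         + p_pos_given_healthy * (1 - p_disease))

# Posterior: probability of disease given a positive test
p_disease_given_pos = p_pos_given_disease * p_disease / p_pos
print(f"P(disease | positive test) = {p_disease_given_pos:.3f}")  # ~0.090
```

Despite the ‘99% accurate’ test, a positive result implies only about a 9% chance of disease, because the healthy population is so much larger than the sick one.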

Confirmation bias

Confirmation bias is the tendency of people to favor information that confirms their beliefs or hypotheses. Essentially people are prone to misperceive new incoming information as supporting their current beliefs.

It has been found that experts reassess data selectively over time, depending on their prior hypotheses. Bayesian statisticians argue that Bayes’ theorem is a formally optimal rule for revising opinions in the light of evidence. Nevertheless, Bayesian techniques are, so far, rarely utilized by management researchers or business practitioners in the wider business world.

Eliezer Yudkowsky of the Machine Intelligence Research Institute has written a detailed introduction to Bayes’ theorem using behavioral economics examples and machine learning, which I highly recommend.

Time to think Bayesian and Behavioral Economics

As the major tech companies are showing, Bayesian and behavioral economics methods are well suited to address the increasingly complex phenomena and problems faced by 21st-century researchers and organizations, where very complex data abound and the validity of knowledge and methods is often seen as contextually driven and constructed.

Bayesian methods that treat probability as a measure of uncertainty may be a more natural approach to some high-impact management decisions, such as strategy formation, portfolio management, and decisions whether or not to enter risky markets.

If you are not thinking like a Bayesian, perhaps you should be.