Tuesday 31 December 2013

New Year's Resolutions

What have I learnt in 2013 and what do I hope to achieve in 2014?

This time last year I was suffering from persistent pain just under my ribs on the right-hand side. It was initially diagnosed as gallstones on 2 January, and the presence of stones was confirmed by an ultrasound later in the month.  I then went through two more months of pain because the NHS consultant did not believe I had gallstones: statistically I was too young and not fat enough to be troubled by stones, which ignored the fact that the prior probability should have been updated with the information that I had pain centred on my gall-bladder.  Eventually I was admitted to hospital with jaundice and an infected gall-bladder.   A few months later my father-in-law was diagnosed with depression; in September, following mild paralysis, they realised his mood change was a consequence of a brain tumour, and my father-in-law died on 26 December.   I learnt, as much of the British public have come to realise, that the NHS is not as perfect as we like to think.

While in hospital, on morphine, it struck me that I should look into Pragmatism.  Donald MacKenzie had mentioned Pragmatism in 2012 and I was aware Poincaré was linked to the philosophy, but I did not really understand it: why should I, as a mathematician?  I spent the eight weeks recovering, and then most of June and July, reading up on the topic.  Pragmatism enabled me, as an atheist, to reconcile Virtue Ethics with Financial Economics and I drafted my paper Reciprocity as the Foundation of Financial Economics.  This was a significant silver lining to the cloud of being in pain for three months.

In August I met Brett Scott (@Suitpossum) who, like me, was speaking at the Edinburgh Fringe's Cabaret of Dangerous Ideas.  It was good to meet Brett, whom I admire greatly.  He was able to make explicit (for me) that there was an issue around mathematics obscuring, rather than enlightening, finance, and since we met this has become an increasingly important issue for me.  I also came to believe that, while Brett and I agree on many issues, our difference is that he has a fundamental concern with scarcity while I have a fundamental concern with uncertainty.  Later in the month I was interviewed by David Fergusson at the Cabaret of Dangerous Ideas.  It was helpful for me that David saw some merit in the work I was doing, and we might collaborate on the relationship between science and religion in the future.  David recommended that I read After Virtue, which I am still in the process of completing, but I have read the "students' guide".  Following on from my meeting with David I met Paolo Quattrone and Michael Northcott and discussed issues relating to re-orienting finance.  Michael, as an Episcopalian, made some comments about how time is irrelevant in the Christian context but dominates finance; this seemed to link to Brett's views rooted in deep ecology.  I have thought a lot in the past about the relationship between randomness and time.

As well as these face-to-face interactions, Arthur Charpentier (@freakonometrics) is my modal "Favorite" on Twitter while Noah Smith (@Noahpinion) has prompted many of my blog posts.  Thanks go to Jon Harris (@jonone100) and Dave Marsay for useful comments on my blog and thanks to Mark Thoma (@MarkThoma / economistsview.typepad.com) for disseminating my posts.

In the latter quarter of the year, with the REF mayhem out of the way, I began to look forward to where my research should focus.  In April the IMA conference on Mathematics in Finance had taken place, with me as the (ill) Chair of the organising committee.  The Bank of England had provided input into the organisation of the meeting and they had highlighted the need for mathematicians to shift their perspective away from stochastic calculus and re-focus from micro- to macro-economic issues, as in Size and complexity in model financial systems, and to address the concerns that would be identified in para. 89 of v. II of Changing Banking for Good, that mathematics aids 'insincerity' in financial practice.  A bit later, around November 2012, Kenneth Lloyd, a software engineer, responded to my piece Ethics and Finance: The Role of Mathematics and asked whether I had ever considered modelling financial networks based on reciprocity and on profit maximisation, and working out which would be "better".  Finally I have been following the emergent phenomena of peer-to-peer lending and crowdfunding, being an investor this year in Harlaw Hydro, a crowdfunded community energy project.

As a result of these interactions I will be looking to follow up on Kenneth's suggestion, which is to investigate whether financial systems are more or less resilient and effective if based on different commercial cultures, e.g.: profit/loss sharing (Islamic musharakah); loan interest based purely on the objective risk borne by the lender (the Scholastic usury prohibition); an interest rate determined by the opportunity cost (market based); or interest that aims to maximise returns to the lender.  I intend to model financial systems as graphs and study how the different commercial cultures affect the evolution of the graph topology, and then how money/credit is transmitted on the different graph topologies. Effectiveness will be measured by the efficiency in enabling lending and resilience by the ability of a financial network to withstand shocks generated by losses. This research question addresses an issue in moral philosophy, namely the relationship between ethics and the structure of the polis, and is motivated by themes in Pragmatic philosophy, in particular the hypothesis that reciprocity emerges as a social norm to enable resilient and effective communities where exchange is important. The different commercial cultures can be seen as representing points on a spectrum of commercial attitudes, and in cultures where there are low interest rates the hypothesis is that there will be greater homophily between the agents in the network, and that this will create more resilient and effective financial networks.  The research will aim to inform regulators of any intrinsic merits of emerging financial mechanisms, particularly crowdfunding and peer-to-peer lending, not least because there is concern that regulators will stifle the democratisation of finance.
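To make the proposal concrete, here is a minimal sketch of the kind of simulation I have in mind. The rules, parameters, function names and the four 'cultures' encoded below are illustrative placeholders of my own rather than the model itself, and the graph-topology dynamics that are the real object of the study are omitted.

import random

def simulate(culture, n_agents=100, n_loans=500, shock=0.2, seed=1):
    # Each agent starts with one unit of capital; loans are made at random and
    # the lender's return depends on the prevailing commercial culture.
    rng = random.Random(seed)
    capital = [1.0] * n_agents
    loans = []
    for _ in range(n_loans):
        lender, borrower = rng.sample(range(n_agents), 2)
        if capital[lender] <= 0:
            continue
        amount = 0.1 * capital[lender]
        risk = rng.uniform(0.0, shock)            # borrower's objective default risk
        if culture == "profit_sharing":           # musharakah: lender shares the outcome
            rate = None
        elif culture == "usury_prohibition":      # interest covers only the risk borne
            rate = risk
        elif culture == "market":                 # opportunity cost sets the rate
            rate = 0.05 + risk
        else:                                     # "maximising": whatever can be extracted
            rate = 0.05 + 3 * risk
        capital[lender] -= amount
        capital[borrower] += amount
        loans.append((lender, borrower, amount, rate, risk))
    # Settle the loans after a round of defaults.
    for lender, borrower, amount, rate, risk in loans:
        if rng.random() < risk:
            continue                              # default: the lender loses the principal
        repayment = amount * (1.1 if rate is None else 1.0 + rate)
        capital[borrower] -= repayment
        capital[lender] += repayment
    effectiveness = sum(amount for _, _, amount, _, _ in loans)
    resilience = sum(1 for c in capital if c > 0) / n_agents
    return effectiveness, resilience

for culture in ("profit_sharing", "usury_prohibition", "market", "maximising"):
    print(culture, simulate(culture))

The interesting version of the experiment replaces the random lender-borrower pairing with a matching rule that depends on the culture, so that the network topology itself evolves differently under each regime and the homophily hypothesis can be tested.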

If anyone is interested in being involved in this project, please do get in touch.

Happy New Year!

Thursday 19 December 2013

Is finance guided by good science or convincing magic?

Noah Smith posted a piece, "Freshwater vs. Saltwater" divides macro, but not finance.  As a mathematician I did not really understand the argument (a nice explanation is here) but there was a comment from Stephen Williamson that really caught my attention:
Another thought. You've persisted with the view that when the science is crappy - whether because of bad data or some kind of bad equilibrium I guess - there is disagreement. ... What's at stake in finance? The flow of resources to finance people comes from Wall Street. All the Wall Street people care about is making money, so good science gets rewarded. I'm not saying that macroeconomic science is bad, only that there are plenty of opportunities for policymakers to be sold schlock macro pseudo-science.
What I aim to do in this post is offer an explanation for the 'divide' in economics from the perspective of moral philosophy and, on this basis, argue that finance is not guided by science but by magic.

As a Scottish Marxist in the 1950s, Alasdair MacIntyre wanted to establish if Stalinism could be reasonably criticised without undermining Marxism.  In 1981 MacIntyre published his conclusion: modern moral philosophy was incapable of reasonably criticising anything because it was dominated by arguments based on willpower.  MacIntyre argues (on my naive interpretation) that society should focus on where it wants to get to, and then work out how to get there; this is the Aristotelian approach that was abandoned in the nineteenth century.  The dominant philosophical approach is that science establishes principles and society is a deductive consequence of those principles.  The Nietzschean approach is to dominate the discussion of the principles, through willpower, and so determine the direction in which society travels.

For example, all (British) mainstream political parties believe in 'fairness', but we never discuss what a fair society actually looks like.  The debate focuses on the conflicting principles that fairness is founded on an equal allocation of resources or on equal opportunity; these apparently similar principles lead to very different policies.  The contemporary economic debate seems to be governed by the conflicting principles of whether you are pro-austerity or pro-deficit, not by a discussion of what type of society people want and how, whether through tight or loose monetary policy, that society will be achieved.

I think there is a symbiotic relationship between the Nietzschean approach and academics.  If the focus of the debate switches to the democratic deliberation of what type of society people want, scientists become the servants of society.  In the current system, scientists become the guides of society.  MacIntyre's moral philosophy emasculates academics and it is not surprising many regard it as relativistic mumbo-jumbo.

Relativism is a problem if differing points of view are seen as exclusive options.  The solution to relativism is not to shrug your shoulders and accept that the sacrifice of children is acceptable in a cultural context; it is to engage in deliberation that attempts to understand the reasons for the action and to reflect on where there are weaknesses in your own cultural norms.  The disagreement in macro-economics is an indication that economists see some benefit in the process of public deliberation.

The current political paradigm includes the principle that policy should be 'evidence based', reliant on 'science', without much thought to how 'science' is constructed.  The problem with the economic debate is that the discussion is focusing on the scientific questions, not the democratic questions.

Most physical scientists will have given up a few paragraphs ago, because scientists are committed to the belief that they are guided by 'nature'.  What these types of scientists generally refuse to acknowledge is that they choose which questions to answer.  In The Value of Science Poincaré argued against the idea, popular today, of "science for science's sake"; rather, the scientist should concern themselves with identifying the hidden connections between the apparent facts in the service of society (mathematics is there to do this when experimentation is not possible).  A good scientist is someone who asks the important questions.

The advantage that physical scientists have over social scientists is that they have a significant degree of autonomy over what they study.   People believe that cosmology and particle physics are important because there are a lot of (historically) well funded cosmologists and particle physicists telling them that these are important, and they close the discussion down by appealing to the almost divine authority of 'Nature'.  Brian Cox seems to really believe that he is guided by Nature in determining that it is more important to look for the Higgs Boson than to work out obtuse questions, such as "what is money". The result is that the chattering classes have a better comprehension of quantum mechanics than of the relatively straightforward financial system, and the effect is that people are perplexed by financial crises and unable to formulate coherent responses to them.  Cox has said the money spent on finding the Higgs Boson (around $13 billion, the accounts are not clear; a cynic might say that at this cost they were bound to prove the maths) was well spent in comparison to the £38 billion banking support.  The point is that finding the particle does not contribute to mitigating financial disasters, and the UK's share of the LHC funding, in retrospect, could have been better spent.

In 1902 Marcel Mauss and Henri Hubert wrote in A General Theory of Magic:
 The magician is a person who, through his gifts, his experience or through revelation, understands nature and natures... Owing to the fact that those magicians came to concern themselves with contagion, harmonies, oppositions, they stumbled across the idea of causality, which is no longer mystical even when it involves properties which are no way experimental
The two distinguish magic and science by observing that magic is based on belief in a set of rituals.  A person will only consult a magician if they  have faith in the actions that the magician will perform.  Science is not based on belief in its theorems, the equivalent of magic's rituals, but on a belief in the process by which science is created.  This is a subtle point, but the effect is that magic is necessarily static, a contemporary astrologer would have more authority if they claimed to be experts in ancient knowledge.  Similarly, most religions claim to encapsulate what is permanent in a changing world. On the other hand, science is necessarily  dynamic, we trust modern science's explanations of cosmology more than  those of the Babylonians.

The implication of this distinction is that either mathematics exists independently of human thought and mathematicians discover theorems (Platonism, or 'Mathematical Realism', under which mathematics is immutable, as Augustine claimed), or mathematics is created by living, breathing mathematicians in response to the world around them.  The advantage of Platonism is that it provides scientists with a stable framework in which they can work, and it is regarded by many scientists, such as the physicist Roger Penrose, as an invaluable tool.  On the other hand, the implication of anti-Platonism is that mathematics is dependent on society's attitudes, and its claims to certainty are as strong as the claims to certainty of the social sciences.

While magic and science are distinguished by static or dynamic belief, Mauss and Hubert distinguish magic and religion by hidden and open belief:
Where religious rites are performed openly, in full public view, magical rites are carried out in secret... and even if the magician has to work in public he makes an attempt to dissemble: his gestures become furtive and his words indistinct.
The suggestion is that, for science to be reputable and maintain a divide with magic, it needs to be carried out, like religion, in the open.  As soon as either science or religion takes place out of the public arena, they risk degenerating into magic.  Today, many scientists, in particular social scientists, regard scientific knowledge as 'shared belief', not necessarily 'justified belief'; science is less about 'truth' and more about 'consensus', in line with an Italian definition of science:
the speculative, agreed-upon inquiry which recognizes and distinguishes, defines and interprets reality and its various aspects and parts, on the basis of theoretical principles, models and methods rigorously cohering
Science is speculative, not certain, and agreed-upon, not secret.  It is on this basis that society can begin to understand the value of science.

So, in response to Stephen Williamson's implication (and it is an implication, he does not make the statement) that finance is more scientific, I have two comments.  Firstly, I agree that the focus of contemporary finance is on "making money" and this provides a clear objective for the discipline to work towards.  The question I would pose is whether "making money" is a good internal to the practice of finance. I would argue that the good internal to finance is the effective distribution of money to fund economic activities.  The monomania of Wall Street needs to be challenged and I wonder if re-orienting finance to focus on (my view of) its internal goods would result in such a 'scientific' finance.

The second issue is highlighted in the UK Parliament's report on banking.  The document makes few references to the role of mathematics in finance, but where it does it is damning
89. The Basel II international capital requirements regime allowed banks granted "advanced status" by the regulator to use internal mathematical models to calculate the risk weightings of assets on their balance sheets. Andy Haldane described this as being equivalent to allowing banks to mark their own examination papers. A fog of complexity enabled banks to con regulators about their risk exposures:
[...] unnecessary complexity is a recipe for […] ripping off […], in the pulling of the wool over the eyes of the regulators about how much risk is actually on the balance sheet, through complex models.
The science, the mathematics, is not being used to enlighten finance but to obscure its practices.  Recently the report on J.P. Morgan's London Whale revealed how, by tweaking their model, the bank could reduce their apparent exposure from around $40 billion to $20 billion.  The Whale report highlights how finance is actually more committed to 'rituals' around risk management than to the 'science' of risk management, and this seems to be facilitated by mathematics.
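A toy illustration of the mechanism, not J.P. Morgan's actual model: the reported risk of a hedged book can depend almost entirely on a single assumed correlation between a position and its hedge, so a modest 'tweak' to that parameter more than halves the apparent exposure. The function name and figures below are arbitrary.

from math import sqrt

def var_99(sigma_position, sigma_hedge, correlation):
    # 99% value-at-risk of a position plus an offsetting hedge, both assumed
    # Normal; 2.33 is roughly the 99th percentile of the standard Normal.
    sigma = sqrt(sigma_position ** 2 + sigma_hedge ** 2
                 - 2 * correlation * sigma_position * sigma_hedge)
    return 2.33 * sigma

print(var_99(10.0, 9.0, 0.60))   # weaker assumed hedge effectiveness: ~19.9
print(var_99(10.0, 9.0, 0.95))   # stronger assumed hedge effectiveness: ~7.4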

I think there are a variety of factors involved in this obfuscation, not least the culture of associating mathematics with hidden truths: the mathematician has a magical key to financial reality.  There is also a metaphorical issue.  At the start of the seventeenth century Francis Bacon is associated with using the metaphor of Science as masculine, probing and taming the feminine Nature. Towards the end of the seventeenth century the metaphor of finance as 'Lady Credit' similarly emerged, and I think there has been a similar sense that a masculine Science can tame the fickle and unruly Lady Credit.  I think both relationships could improve by becoming less dysfunctional.  Ultimately, and untypically, I associate the failures of contemporary finance not with its own unruliness but with interference by a deterministic scientific ethos.  Economics might appear incoherent, but it is finance's coherence, in the wrong direction, that causes more real problems.

Tuesday 10 December 2013

Language barriers in understanding risk and uncertainty

Arthur Charpentier and Dave Giles reminded me of the disagreement over the terms random and chance variable, and brought to my mind the misunderstandings that can occur when economists use the word "risk".  Following Frank Knight, economists will often use the word "risk" to represent a 'known' probability, whereas an "uncertainty" is an 'unknown' probability.  Knight gives a business example:
the bursting of bottles does not introduce an uncertainty or hazard into the business of producing champagne; since in the operations of any producer a practically constant and known proportion of the bottles burst, it does not especially matter even whether the proportion is large or small. The loss becomes a fixed cost in the industry and is passed on to the consumer, like the outlays for labour or materials or any other.
I have never found Knight's distinction particularly helpful because  the word "risk" is commonly used to represent the possibility of a loss:
 OED 1. (Exposure to) the possibility of loss, injury, or other adverse or unwelcome circumstance; a chance or situation involving such a possibility. 
with the first example appearing in 1621.  The word originates in post-classical Latin, either from the classical Latin  resecare, "something that cuts" i.e. rocks, or from the Arabic rizq which has a number of meanings: ‘provision, lot, portion allotted by God to each man’, ‘livelihood, sustenance’, hence ‘boon, blessing (given by God)’, ‘property, wealth’, ‘income, wages’, and finally ‘fortune, luck, destiny, chance’.

I prefer to use "chance" in place of Knight's "risk":
OED 1 The falling out or happening of events; the way in which things fall out; fortune; case.
from the Latin cadere, 'to fall', highlighting the relationship between "chance" and the roll of the dice.  The association between "chance" and probability was established by de Moivre with his Doctrine of Chances.  While it is unwise to contradict the Oxford English Dictionary (OED) I tend to think chance is derived from the Dutch word kans.

Ian Hacking discusses the problems Huygens, who like all Dutch mathematicians of the time wrote firstly in Dutch and then translated into Latin for an international audience, had in translating the word kans, which according to Google Translate can mean either 'chance' or 'opportunity'.  The obvious Latin equivalent for kans as chance would have been sors, 'lot'. Huygens, or possibly his editor Schootens, chose expectatio, highlighting the association between chance and opportunity, whereas Knight had associated it with risk.  An alternative to expectatio that Huygens considered was spes, the Latin word for the Christian virtue 'Hope', related to the Roman goddess of hope.  The French still employ the word espérance for mathematical expectation while the English use expectation (the Dutch use verwachting, which depending on the context can be translated as 'hope, promise, expectation, forecast, prognosis').

Knight did not really innovate in his use of "risk"; according to the OED it originates with De Morgan:
OED2b. The error of an observation or result considered without regard to sign; the probability of an error; the mean weighted loss incurred by a decision taken or estimate made in the face of uncertainty; spec. = mean-square error 
De Morgan defined it in 1832
 This is what Laplace calls l'erreur moyenne à craindre en plus, and the corresponding error en moins is of the same magnitude with a different sign. We shall call it the risk of the observation, the sign of the error not being considered.
 My interpretation is that De Morgan's thinking is closely related to Quetelet's - a deviation from the norm is a risk - see my previous post on this.

The word "random" seems very inappropriate, apparently originating in Middle French randon, meaning 'speed' or 'hast'.  We have
OED A1a. Impetuosity, great speed, force, or violence (in riding, running, striking, etc.); chiefly in with (also in) great random .  (earliest occurrence 1325)
OED A2a. Gunnery. The range of a piece of ordnance, esp. the long or full range obtained by elevating the muzzle of the piece.  (earliest occurrence 1560)
OED A 3. A haphazard or aimless course.  (earliest occurrence 1565)
and the first use as an adverb and adjective
OED B1 At random, randomly. (earliest occurrence 1619)
OED C1a. Having no definite aim or purpose; not sent or guided in a particular direction; made, done, occurring, etc., without method or conscious choice; haphazard.  (earliest occurrence 1655)
The first use in mathematics was in 1884:
Applying the Calculus of Probabilities..to the question of whether the distribution of the fixed stars can be regarded as the result of a random sprinkling.
A mathematician might use stochastic as an adjective in preference to random or chance.  The word derives from the Greek 'to aim at a mark, guess' according to the OED; my Greek PhD supervisor said it related to shooting arrows.  Its first appearance in English was in 1662:
But yet there wanted not some beams of light to guide men in the exercise of their Stocastick faculty.
and then in 1688 in Cudworth's Treatise on Free Will:
There is need and use of this stochastical judging and opining concerning truth and falsehood in human life.
What do I conclude? There is no certainty in what we are talking about when using words like risk, chance and random, and it is not surprising we have difficulty dealing with the concepts.

Tuesday 3 December 2013

The rational man, the average man and the replacement of deliberation by will


A few weeks ago I changed my broadband supplier. Things were a bit ropey the first weekend; my son had problems watching Lego Star Wars videos, and he generously shared his frustration with me. The following week I had a customer service call, and when I gave the service 3 out of 10, the person on the line said “That’s great”. When I queried why 3 out of 10 was great, the answer I got could be interpreted as 3/10 facilitated a Normal distribution of satisfaction. ‘Big data’, Bayesian inference and such like are big themes in the contemporary Zeitgeist and I sometimes think that there is an attitude that if a distribution is not Normal, it is pathological, even in the case of customer satisfaction. This is an issue for me, as someone who believes that, in science at least, dependence is far more important than the independence that creates the link between Normality and the Central Limit Theorem.
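As an aside, a small numerical illustration of why that independence matters (my own toy example, nothing to do with customer-satisfaction surveys): with independent observations the spread of sample means shrinks like 1/sqrt(n), which is what the Central Limit Theorem trades on, but with strongly dependent observations it does not. The AR(1) process below is just one convenient way of creating dependence.

import random, statistics

def sample_mean(n, dependent, rng):
    # Independent case: fresh Gaussian shocks.  Dependent case: a heavily
    # autocorrelated AR(1) sequence built from the same shocks.
    x, total = 0.0, 0.0
    for _ in range(n):
        shock = rng.gauss(0.0, 1.0)
        x = 0.99 * x + shock if dependent else shock
        total += x
    return total / n

rng = random.Random(0)
for dependent in (False, True):
    means = [sample_mean(1000, dependent, rng) for _ in range(500)]
    label = "dependent  " if dependent else "independent"
    print(label, "spread of sample means:", round(statistics.stdev(means), 3))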

This piece is about how the Romantic’s ‘average man’, the personification of the Law of Large Numbers and the Central Limit Theorem, replaced the Enlightenment’s ‘rational man’ and some thoughts on the consequences.  The post meanders from the Petersburg game, through Enlightenment education, to Laplacian determinism, social physics, biometrics, MacIntyre's Virtue Ethics and Austrian economics.

Jean Le Rond d’Alembert is famous in mathematics for solving the problem of a vibrating violin string, deriving the fundamental differential ‘wave equation’ in the process. He was abandoned as a baby by his mother outside the Parisian church of St Jean Baptiste le Rond in 1717 and adopted by a relatively poor family. It turned out that d’Alembert’s natural father was a chevalier and a distinguished army officer, Louis-Camus Destouches, while his mother was a famous writer and socialite, Claudine Guérin de Tencin, Baroness of Saint-Martin de Ré. D’Alembert’s adoptive family was provided with money by his natural parents for Jean to study and he became a lawyer when he was twenty-one. He taught himself mathematics, being admitted to the Académie Royal des Sciences in 1741 and developing a reputation as one of Europe’s leading mathematicians by his mid-thirties.

D’Alembert also became a well known figure in Parisian society, living, ‘unconventionally’ with a famous salon-owner, Julie de Lespinasse, and working with Denis Diderot on the French Enlightenment’s Encyclopédie, which paved the way for the French Revolution. D’Alembert was sympathetic to the Jansenists, the sect Pascal was associated with, and played a role in the expulsion of the Jesuits from France in the early 1760s. Despite becoming the Secretary of the Académie Royal des Sciences, the most influential scientist in France, d’Alembert, on account of his atheism, was buried in an unmarked grave when he died in 1783.

D’Alembert lived at the height of the debates around the Petersburg game and took a rather extreme view about probability: since perfect symmetry is impossible, probability can never be objective. Because science was supposed to be objective during the Enlightenment [7, p 11], the apparent subjectivity of probability led d’Alembert to be sceptical about the whole field [4]. In fact he was possibly the first person to criticise probability for ignoring psychology, when he commented that a paper by Daniel Bernoulli, advocating smallpox inoculation by calculating the gain in life expectancy, ignored that ‘reasonable men’ might well trade the long-term risk of smallpox for the short-term risk associated with inoculation [8, p 18].

Despite this scepticism d’Alembert did provide some insight into the Petersburg game. For him, the apparent paradox arose because the game could continue for ever, for an infinite number of coin tosses. It was absurd to believe the game could offer an infinite payoff, at some point time and money would run out, and d’Alembert suggested that the game should end after the person, or the ‘casino’, putting up a stake, was bankrupted.

This line of thought was developed by the Marquis de Condorcet. Condorcet was born legitimately into the nobility, in 1743, and so was twenty-six years younger than the less blessed d’Alembert. In his early twenties, after a good education, he wrote a treatise on integration, and was elected to the Académie Royal des Sciences in 1769. After publishing another work on integration he met Louis XVI’s finance minister, Turgot, and, following in Newton’s footsteps, was appointed Inspector General of the Paris (French) Mint in 1774. In spite of being a member of the Ancien Régime, Condorcet had liberal views, supporting women’s rights and opposing the Church and slavery, and when the Revolution started he was elected to the Revolutionary government. However, this was not a good position to be in when the Terror began, and Condorcet went into hiding in October 1793. Fearing his political opponents were on to him he fled Paris in March 1794, but was almost immediately captured and imprisoned, dying in unexplained circumstances at the end of March, four months before the end of the Terror.

Condorcet is important in linking mathematics to the social sciences, possibly through the influence of his boss, Turgot. During the height of Louis XIV’s reign the dominant economic theory was mercantilism, which can be summed up as the belief that wealth equated to gold. Around the time of the Seven Years War, and in the wake of the earlier Mississippi Bubble, a new theory emerged in France in which wealth was determined not by coin but by what a country produces, in particular its agricultural production. These ideas were developed in the mid-eighteenth century as physiocracy (’rule by nature’), particularly by Turgot and Quesnay, who achieved fame as the royal physician. Quesnay, in his ‘Economic Table’, saw the economy as a system whereby the surplus of agricultural production flows through society, enriching it.

Physiocracy was popular with the aristocrats of the Ancien Régime, because it argued that all wealth originated from the land, and so the landowning class was central to the economy, with merchants being mere facilitators of the process. While the Scotsman Adam Smith is often cited as the first modern economist, he was in fact developing, in a none the less revolutionary way, the ideas of the French physiocrats [1, p 61], [13, p 165]. One of Smith’s important contributions was that it was not land, but labour, that was at the root of wealth.

Despite working at the Mint, Condorcet did not produce anything of significance in economics, though his most important work, ‘Essay on the Application of Analysis to the Probability of Majority Decisions’, was in social science. In the essay he shows that in a voting system it is possible to have a majority preferring option A over B, another majority preferring option B over C, and another majority that prefers C over A; no option is dominant, which is known as Condorcet’s paradox. Another influential work was ‘Historical Picture of the Progress of the Human Mind’, written while in hiding and arguing that expanding knowledge, in both the physical and social sciences, would lead to a more just, equitable and prosperous world.
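A three-voter example (my illustration, not Condorcet's own) makes the cycle explicit:

from itertools import combinations

# Each ballot lists a voter's preferences from best to worst.
ballots = [("A", "B", "C"),
           ("B", "C", "A"),
           ("C", "A", "B")]

def prefers(ballot, x, y):
    return ballot.index(x) < ballot.index(y)

for x, y in combinations("ABC", 2):
    votes_for_x = sum(prefers(b, x, y) for b in ballots)
    winner, loser = (x, y) if votes_for_x >= 2 else (y, x)
    print(f"a majority prefers {winner} to {loser}")

# Output: A beats B, B beats C, yet C beats A; the majorities form a cycle.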

In relation to the Petersburg game, Condorcet starts off by making a trivial, but none the less important, observation. According to Huygens, if you play a game where you will win 10 francs on the toss of a head and lose 10 francs on the toss of a tail, the mathematical expectation is that you will win (lose) nothing. But the reality is that you would win 10, or lose 10, francs. Condorcet realised that the mathematical expectation gave the price of the Petersburg game over the long run, in fact a very long run that would accommodate an infinite number of games, each game having the potential to last an infinite number of tosses.

Having made this observation Condorcet put more structure on the problem: say the number of tosses was limited by the potential size of the winning pot. According to the philosopher and science historian Gérard Jorland, Condorcet then solved the problem by thinking “of the game as a trade off between certainty and uncertainty” and established that its value was a function of the maximum number of coin tosses possible [10, p 169]; the value of the game was only infinite if there could be an infinite number of tosses.
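A rough reconstruction of the arithmetic behind this (my sketch of the idea, not Condorcet's own calculation, assuming the usual convention that the payoff is 2^k francs if the first head appears on toss k): each possible toss adds exactly one franc to the expectation, so capping the game at N tosses caps its value at N francs.

def petersburg_value(max_tosses):
    # Each term contributes (1/2**k) * 2**k = 1, so the value equals max_tosses;
    # it is only infinite if an unlimited number of tosses is allowed.
    return sum((0.5 ** k) * (2 ** k) for k in range(1, max_tosses + 1))

for n in (10, 20, 40):
    print(n, "tosses allowed, value =", petersburg_value(n))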

According to Jorland, the Petersburg problem “would have most likely faded away” had Daniel Bernoulli’s treatment of it not been endorsed by the man sometimes referred to as the ‘Newton of France’, Pierre-Simon Laplace. Laplace was born, in 1749, into a comfortable household in Normandy; the family were farmers but his father was also a small-scale cider merchant. He enrolled as a theology undergraduate at the University of Caen when he was 16, but left for Paris in 1768, without a degree but with a letter of introduction to d’Alembert. D’Alembert quickly recognised Laplace’s skills, and as well as taking responsibility for his mathematical studies secured him a teaching position at the École Militaire, where he taught the young Napoleon Bonaparte in 1785. Laplace’s early work was related to calculus, and by 1773 he had presented 13 papers to the Académie Royal des Sciences. Despite this productivity Laplace had failed twice, in 1771 and 1772, to be elected to the Académie, prompting d’Alembert to write to Lagrange, who was the mathematical director at the Berlin Academy of Science, asking whether it would be possible to get Laplace elected to the Berlin Academy and a job found for him in Germany. However, before Lagrange could reply, Condorcet, who was Secretary at the Académie, pulled some strings and Laplace was admitted to the centre of French science in 1773.

Laplace’s reputation is built on two pairs of mathematical texts, ‘Celestial Mechanics’ with ‘The System of the World’ (1796) and ‘Analytic Probability Theory’ (1812) with ‘Probability Theory’ (1819). The first book in each pair was a technical, mathematical, description of the theory while the second book in each pair was a description for general audiences. ‘Celestial Mechanics’ is now regarded as the culmination of Newtonian physics while in ‘Analytic Probability Theory’ Laplace closed the discussion on the Petersburg Game. Laplace adopted Daniel Bernoulli’s approach, re-stating his three results as [10, pp 172–176]
  1. A mathematically fair game is always a losing game under ‘moral expectation’ (utility theory).
  2. There is always an advantage in dividing risks (diversification).
  3. There may be an advantage to insure.
Laplace solved the paradox that ‘moral expectation’ differed from ‘mathematical expectation’, by showing that if games could be repeated infinitely many times, or risks divided into infinitesimally small packages, then ‘moral expectation’ equalled ‘mathematical expectation’.
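For comparison, Daniel Bernoulli's 'moral expectation' of the unrestricted game is already finite: replacing the payoff by its logarithm (utility) makes the series converge. The sketch below is a standard presentation rather than Bernoulli's own text; it uses the same 2^k payoff convention as above and, for simplicity, ignores the gambler's initial wealth, which Bernoulli did include.

from math import exp, log

def moral_expectation(terms=200):
    # Expected log-payoff: sum over k of (1/2**k) * log(2**k), which converges.
    return sum((0.5 ** k) * log(2 ** k) for k in range(1, terms + 1))

# The certainty equivalent of the unrestricted game under log utility is a mere 4 francs.
print(exp(moral_expectation()))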

Laplace’s mentor, Condorcet, believed that nature followed constant laws and these could be deduced by observation: “The constant experience that facts conform to these principles is our sole reason for believing them” [5, quoted on p 191]. Laplace is closely associated with this idea, that of ‘causal determinism’, which is encapsulated in his ‘Philosophical Essay on Probabilities’ (1814),
We may regard the present state of the universe as the effect of its past and the cause of its future. An intellect which at a certain moment would know all forces that set nature in motion, and all positions of all items of which nature is composed, if this intellect were also vast enough to submit these data to analysis, it would embrace in a single formula the movements of the greatest bodies of the universe and those of the tiniest atom; for such an intellect nothing would be uncertain and the future just like the past would be present before its eyes.
and he goes on to say that
[and we owe] to the weakness of the human spirit [i.e. it is not as intelligent as the ‘intellect’] one of the most delicate and most ingenious theories of Mathematics, which is the science of chance or probability
For Laplace, the roll of a die is not random: given precise information of the position, orientation and velocity of the die when it left the cup, the result of the roll was perfectly predictable [11, p 65]. At the heart of Laplace’s determinism was knowledge, and probability was a measure of ignorance, not of ‘chance’. It is in this respect that he is close to James Bernoulli [8, p 11], but, as a product of the Enlightenment, Bernoulli’s God is replaced by ‘an intellect’, Laplace’s demon. The positions of Laplace and Bernoulli differ significantly from those of the ‘Ancients’, who distinguished between the predictable (eclipses), the foreseeable (the weather) and the random (stumbling on a treasure).


A persistent problem with determinism is that it can lead to a collapse in moral responsibility. In the syllogistic tradition, starting with the premise that humans have no free will it is easy to come to the conclusion that anything goes. For example,
P1. Actions are either pre-determined or random.
P2. If an action is pre-determined, the entity who performed the action is not morally responsible.
P3. If an action is random, the entity who performed the action is not morally responsible.
C. No one is morally responsible for their actions.

This was not simply a philosophical issue; in the mid-nineteenth century there was real concern that allowing steam boiler makers, for example, to insure themselves against deadly explosions of their products would “undermine the very virtues of foresight and responsibility”. Removing risk seemed to remove people’s sense of responsibility [5, p 188].

The issue is that, for the syllogistic method to come up with an answer that most people would be comfortable with, we need to include morality as a premise, rather than looking for it as a conclusion. Doing this, we can change the argument to
P1. People should be held morally responsible for their actions.
P2. If someone (i.e. a child) cannot foresee the consequences of their actions they cannot be held morally responsible for their actions.
C. Moral responsibility requires that there be foresight.

One of the consequences of the Enlightenment was a belief that in order to be ‘morally responsible’, people needed to have a degree of foresight, which could only be obtained through knowledge, or science; today, this can be seen as the fundamental purpose of science, to enable people to take responsibility for their actions, whether related to the safety of industry or personal diet. This was reflected in Wilhelm von Humboldt’s view that education should turn ‘children into people’, individuals capable of participating in the polis/civitas, rather than ‘cobbler’s sons into cobblers’, as in Francis Bacon’s utilitarian view that ‘knowledge is power’.

The development of probability in the eighteenth century had been motivated by the view that while absolute certainty was beyond human grasp, mathematics, on which the Scientific Revolution had been based, might be a way of discerning regularity out of uncertainty [5, pp xi–xvi]. In this vein, the late-eighteenth century mathematicians regarded probability as a way of turning rationality into an algorithm, which could then be distributed to everyone to help them be more responsible, to become l’homme éclairé, the clear-thinking, rational Enlightenment ideal [5, pp 108–111].

The tangible product was Gauss’s (nineteenth century) approach to dealing with astronomical errors, which proved so invaluable in the physical sciences that it was adopted in the social sciences, in the field of social physics. Social physics was invented by the Belgian astronomer Adolphe Quetelet, who applied Gauss’s theories to human behaviour in his 1835 work ‘On man and the development of his faculties, or Essay on Social Physics’. The term ‘social physics’ had been coined by the French philosopher Auguste Comte, who, as part of his overall philosophy of science, believed that humans would first develop an understanding of the ‘scientific method’ through the physical sciences, which they would then be able to apply to the harder and more important ‘social sciences’. When Comte realised that Quetelet had adopted his term ‘social physics’, Comte adopted the more familiar term, sociology, for the science of society.

An explosion of data collection after 1820 enabled a number of people to observe that certain ‘social’ statistics, such as murder, suicide and marriage rates, were remarkably stable. Quetelet explained this in terms of Gaussian errors. L’homme moyen, ‘the average man’, was driven by ‘social forces’, such as egoism, social conventions, and so on, which resulted in penchants, for marriage, murder and suicide, which were reflected in the means of social statistics. Deviations from the means were a consequence of the social equivalent of accidental or inconstant physical phenomena, such as friction or atmospheric variation [11, Section 5], [15, pp 108–110].

These theories were popular with the public. France, like the rest of Europe, had been in political turmoil between the fall of Napoleon Bonaparte in 1813 and the creation of the Second Empire in 1852, following the 1848 Revolution (setting the prototype for the turmoil between the 1920s and 1970s). During the 1820s there was an explosion in the number of newspapers published in Paris, and these papers fed the middle classes a diet of social statistics that acted as a barometer to predict unrest [15, p 106]. The penchant for murder implied that murder was a consequence of society; the forces that created the penchant were responsible, and so the individual murderer could be seen as an ‘innocent’ victim of the ills of society.

Despite the public popularity of ‘social physics’, Quetelet’s l’homme moyen was not popular with many academics. Quetelet had based the theory on an initial observation that physical traits, such as heights, seemed to be Normally distributed. The problem was that, apart from the fact that heights are not Normally distributed (the incidence of giants and dwarfs in the real population exceeds the number expected under a Normal distribution of heights; Quetelet was confusing ‘looks like’ with ‘is’), murders and suicides are ‘rare’, so there can be little confidence in the statistics. Many experts of the time, including Comte [3, p 39], rejected Quetelet’s theories on the basis that they did not believe that ‘laws of society’ could be identified simply by examining statistics and observing correlations between data ([8, pp 47–48], [11, p 76], [15, p 112]), and even Quetelet, later in life, counselled against over-reliance on statistics [16].
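The 'looks like' versus 'is' point can be made with a toy example (illustrative numbers only, nothing to do with Quetelet's conscript data): a population that is really a mixture of two groups can appear roughly bell-shaped, yet it produces far more 'giants' than the single Normal fitted to its mean and standard deviation would predict.

import random, statistics

rng = random.Random(0)
# 90% of the population from one Normal sub-group, 10% from a taller, more varied one.
population = [rng.gauss(165, 6) if rng.random() < 0.9 else rng.gauss(185, 12)
              for _ in range(100_000)]

mu = statistics.fmean(population)
sigma = statistics.pstdev(population)
cutoff = mu + 3 * sigma
observed = sum(x > cutoff for x in population)
expected = 0.00135 * len(population)      # P(Z > 3) under a true Normal
print(f"beyond three sigma: observed {observed}, a fitted Normal predicts about {expected:.0f}")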

Beyond these practical criticisms there were philosophical objections. L’homme moyen was a ‘statistical’ composite of all society who was governed by Condorcet’s universal and constant laws. L’homme moyen was nothing like the Enlightenment’s l’homme éclairé, the person who applied rational thinking to guide their action, thinking that was guided by science and reason and not statistics. The decline of Quetelet’s theories in Europe coincides not just with the political stability of the Second Empire, but with a change in attitude. The poor were no longer unfortunate as a consequence of their appalling living conditions, but through their individual failings, like drunkenness or laziness. The second half of the nineteenth century was about ‘self-help’, not the causality of ‘social physics’ [15, p 113].

However, Quetelet’s quantitative methods would take hold in Britain. In 1850, Sir John Herschel, one of the key figures of the Age of Wonder, reviewed Quetelet’s works and concluded that the Law of Errors was fundamental to science [2, pp 184–185]. In 1857, Henry Thomas Buckle published the first part of a History of Civilisation in England, which was an explanation of the superiority of Europe, and England in particular, based on Quetelet’s social physics.  Francis Galton combined the work of his half-cousin, Charles Darwin, with that of Quetelet to come up with a statistical model of Hereditary Genius in the 1870s, and in the process introduced the concepts of ‘reversion to the mean’ and statistical correlation.  At the start of the twentieth century Galton’s statistical approach was championed by Karl Pearson, who said that the questions of evolution and genetics were “in the first place statistical, in the second place statistical and only in the third place biological” [8, p 147], and the aim of biologists following this approach was to “seek hidden causes or true values” behind the observed data processed with statistical tools [6, p 7].

In the late-nineteenth century the approach of these, predominantly British, biometricians collided pretty much head on with that of the monk Gregor Mendel. In the 1860s Mendel looked at the mechanism of breeding hybrids and essentially developed a theory of how variation appears in living organisms by experimenting on individual pea plants in his garden, rather than referring to population statistics. Mendel was interested in how a microscopic effect, two pea plants producing a hybrid, manifests itself at the macroscopic level, in statistical regularities; this is essentially a probabilistic, mathematical approach: going from the particular to the general [9, pp 54–56].

The debate in biology between the biometric and Mendelian approaches was one about how to improve society through the process of heredity. If solved correctly, the social engineers of the late nineteenth century believed they could breed out laziness and drunkenness through the ‘science’ of eugenics. Could the secrets of heredity be discovered by observing statistical correlations, or did the solution lie in identifying the biological law [8, pp 145–152]? The biometric and Mendelian approaches were eventually reconciled by the “statistically sophisticated Mendelian” Ronald Aylmer Fisher [8, p 149] in his 1930 book The Genetical Theory of Natural Selection; Anders Hald has described Fisher as “a genius who almost single-handedly created the foundations for modern statistical science”.

Lots of people have suggested I read Alasdair MacIntyre’s After Virtue, which I recently attempted, but the book is ‘thick’ and I have resorted to Reading Alasdair MacIntyre’s After Virtue as a gentle introduction. MacIntyre’s thesis is that sometime in the eighteenth/nineteenth centuries Western philosophy lost its ability to address moral issues. Essentially, modern moral philosophy is a Nietzschean battle of wills, with opposing sides in a debate employing scientific authority and raw emotion to justify pre-determined political objectives (think climate science). (Lutz claims that) MacIntyre locates the origins of this failure in Ockham’s Nominalism and the influence of eighteenth century Augustinian philosophers (Locke, Hume, Jansenists, etc.). This is immediately interesting to me as I think both themes are important, for different reasons.

What has particularly struck me is that Hume is presented as arguing that an individual can calculate what is in their best interests and hence choose a course of action to take, perhaps outside what is the moral norm. I need to explore this further, because it relates Hume’s ideas about individual autonomy to a belief in causal determinism: that the agent can rationally foresee the future. Also, Hume argued that reason was subservient to the passions, that is, there are animal behaviours that will inevitably over-rule reason. I see this theme featuring in eugenics, sociobiology and even in the collection of articles Moral Markets, and it is not a proven phenomenon.

Oskar Morgenstern, and the Austrian economists more generally, was concerned with this problem. When Morgenstern was twelve, in 1914, his family moved to Vienna and, in his own words, he was “deflected to social sciences by war [the First World War]; inflation and revolution in the streets, home difficulties but not by deep intellectual attraction” [14, p 128]. Morgenstern studied economics at Vienna, then dominated by Ludwig von Mises, gaining his doctorate in 1925. He then travelled to Cambridge and the United States, returning to Vienna for his habilitation in 1928. His habilitation thesis, entitled Wirtschaftsprognose (‘Economic Prediction’), in the Austrian tradition rejected the use of mathematics in favour of a philosophical consideration of the difficulties of forecasting in economics when other agents are acting in the economy [12, p 51]. Following his habilitation, Morgenstern was appointed a lecturer at Vienna and then the director of the Vienna Institute of Business Cycle Research.

Unlike many of his economic colleagues, Morgenstern became involved with the Vienna Circle of mathematicians and philosophers, never as an active participant but as a bridge between them and economics [12, p 52]. In 1935 he presented a paper to the mathematicians associated with the Vienna Circle on the problem of perfect foresight. Morgenstern often referred to an episode in Conan Doyle’s story ‘The Final Problem’, which describes the ‘final’ intellectual battle between Sherlock Holmes and the fallen mathematician, Professor Moriarty, which results in them both falling to their apparent deaths in the Reichenbach Falls. At the start of the adventure Holmes and Watson are trying to flee to the Continent, pursued by the murderous Moriarty. Watson and Holmes are sat on the train to the Channel ports:
[Watson]“As this is an express and the boat runs in connection with it, I should think we have shaken [Moriarty] off very effectively.”
“My dear Watson, you evidently did not realise my meaning when I said that this man may be taken as being quite on the same intellectual plane as myself. You do not imagine that if I were the pursuer I should allow myself to be baffled by so slight an obstacle. Why, then, should you think so lowly of him?”
For Morgenstern this captured the fundamental problem of economics. While Frank Knight had earlier realised that profit was impossible without unquantifiable uncertainty, Morgenstern came to think that perfect foresight was pointless in economics. If the world was full of Laplacian demons making rational decisions then everything would, in effect, grind to a halt, with the economy reaching its equilibrium where it would remain forever. Morgenstern writes
always there is exhibited an endless chain of reciprocally conjectural reactions and counter-reactions. This chain can never be broken by an act of knowledge but always through an arbitrary act — a resolution. [14, quoting Morgenstern on p 129]
The imp of the perverse would confound Laplace’s demon by doing something unexpected, irrational and inspired. It is this feature of the social sciences that makes them fundamentally different from the natural sciences, since physics is never perverse.
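Von Neumann's later answer to this regress, which Morgenstern adopted, was to let the players randomise: treat the Holmes-Moriarty pursuit as a two-person zero-sum game and the endless 'he thinks that I think' chain is broken by a mixed strategy rather than by an arbitrary resolution. A toy version, with payoffs of my own invention rather than the ones used in the Theory of Games:

# Rows: Holmes alights at Canterbury or goes on to Dover.
# Columns: Moriarty waits at Canterbury or at Dover.  Entries are payoffs to Holmes.
payoff = [[-10.0, 3.0],    # Holmes at Canterbury: caught, or slips away
          [5.0, -10.0]]    # Holmes at Dover: escapes to the Continent, or caught

(a, b), (c, d) = payoff
# With no saddle point, Holmes's optimal mixture makes Moriarty indifferent
# between his two choices (the standard 2x2 zero-sum solution).
p = (d - c) / (a - b - c + d)              # probability Holmes chooses Canterbury
value = (a * d - b * c) / (a - b - c + d)  # expected payoff to Holmes
print(f"Holmes should stop at Canterbury with probability {p:.2f}; game value {value:.2f}")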

MacIntyre’s alternative to the bun-fighting that typifies modern moral philosophy is a return to an Aristotelian approach characterised by public deliberation, considering not only the means to predetermined ends but also what ends are in the ‘public good’. This, at first sight, appears to be a Pragmatic argument. However, I think there must be a difference because of MacIntyre’s apparent criticism of Ockham’s Nominalism. I think this might be important; specifically, I have always associated Ockham’s Nominalism with his ‘doubt’ in the human ability to understand God, i.e. it is related to uncertainty. My initial thought is that at the root of MacIntyre, as at the root of Aristotle, is Realism.

What interests me is that the idea that reciprocity is at the heart of Financial Mathematics is unorthodox today, but it would have been orthodox in the eighteenth century. Academic debate in the eighteenth century centred on (semi) public deliberation, at the Academies, salons or the monthly meetings of the Lunar Men. This approach disappears in the first half of the nineteenth century; individual scientists retreat to the lab and perform magical experiments for the public, or embark on journeys of exploration enduring a process that moulds their genius: Alexander von Humboldt, Darwin, Lewis & Clark. Around the same time, Laplace's causal determinism manifests itself in social science as social physics, with people being `governed' by the Normal distribution. In a similar way, today businesses are applying Bayesian statistics to make decisions for us, taking away the requirement to deliberate and the ability to act arbitrarily.


References

[1] M. Blaug. Economic Theory in Retrospect. Heinemann, second edition, 1968.
[2] S. G. Brush. The Kind of motion we call heat: A history of the kinetic theory of gases in the 19th century. North-Holland, 1976.
[3] I. B. Cohen. Revolutions, Scientific and Probabilistic. In L. Kruger, L. J. Daston, and M. Heidelberger, editors, The Probabilistic Revolution: Volume 1: Ideas in History. MIT Press, 1987.
[4] L. J. Daston. D’Alembert’s critique of probability theory. Historia Mathematica, 6:259–279, 1979.
[5] L. J. Daston. Classical Probability in the Enlightenment. Princeton University Press, 1998.
[6] G. Gigerenzer. The Probabilistic Revolution in Psychology - an Overview. In L. Kruger, G. Gigerenzer, and M. S. Morgan, editors, The Probabilistic Revolution: Volume 2: Ideas in the Sciences. MIT Press, 1987.
[7] G. Gigerenzer. Probabilistic Thinking and the Fight against Subjectivity. In L. Kruger, G. Gigerenzer, and M. S. Morgan, editors, The Probabilistic Revolution:Volume 2: Ideas in the Sciences. MIT Press, 1987.
[8] G. Gigerenzer. The Empire of Chance: how probability changed science and everyday life. Cambridge University Press, 1989.
[9] R. M. Henig. A Monk and Two Peas: The Story of Gregor Mendel and the Discovery of Genetics. Phoenix, 2001.
[10] G. Jorland. The Saint Petersburg Paradox 1713 – 1937. In L. Kruger, L. J. Daston, M. Heidelberger, G. Gigerenzer, and M. S. Morgan, editors, The Probabilistic Revolution: Volume 1: Ideas in History. MIT Press, 1987.
[11] L. Kruger. The Slow Rise of Probabilism. In L. Kruger, L. J. Daston, and M. Heidelberger, editors, The Probabilistic Revolution: Volume 1: Ideas in History. MIT Press, 1987.
[12] R. J. Leonard. Creating a context for game theory. In E. R. Weintraub, editor, Toward a History of Game Theory, pages 29–76. Duke University Press, 1992.
[13] P. Mirowski. More Heat than Light: Economics as Social Physics, Physics as Nature’s Economics. Cambridge University Press, 1989.
[14] P. Mirowski. What were von Neumann and Morgenstern trying to accomplish? In E. R. Weintraub, editor, Toward a History of Game Theory, pages 113–150. Duke University Press, 1992.
[15] A. Oberschall. The Two Empirical Roots of Social Theory and the Probability Revolution. In L. Kruger, G. Gigerenzer, and M. S. Morgan, editors, The Probabilistic Revolution: Volume 2: Ideas in the Sciences. MIT Press, 1987.
[16] O. B. Sheynin. Lies, damned lies and statistics. NTM Zeitschrift für Geschichte der Wissenschaften, Technik und Medizin, 11(3):191–193, 2001.