Category:Surprised by the Gambler’s and Hot Hand Fallacies


Miller, Joshua Benjamin and Sanjurjo, Adam, Surprised by the Gambler's and Hot Hand Fallacies? A Truth in the Law of Small Numbers (November 15, 2016). IGIER Working Paper No. 552. Available at SSRN: https://ssrn.com/abstract=2627354 or http://dx.doi.org/10.2139/ssrn.2627354

Abstract

We prove that a subtle but substantial bias exists in a standard measure of the conditional dependence of present outcomes on streaks of past outcomes in sequential data. The magnitude of this novel form of selection bias generally decreases as the sequence gets longer, but increases in streak length, and remains substantial for a range of sequence lengths often used in empirical work. The bias has important implications for the literature that investigates incorrect beliefs in sequential decision making---most notably the Hot Hand Fallacy and the Gambler's Fallacy. Upon correcting for the bias, the conclusions of prominent studies in the hot hand fallacy literature are reversed. The bias also provides a novel structural explanation for how belief in the law of small numbers can persist in the face of experience.

Summary and comments

This remarkable article, which 李克强 recommended to me, discusses the following: we flip a coin [math]\displaystyle{ N }[/math] times and keep a record as follows — whenever we see a head (H), we write down the next outcome. Within this record we then compute the proportion of heads [math]\displaystyle{ p^{H}_{1} }[/math] and check whether it is close to the coin's intrinsic probability [math]\displaystyle{ q^{H} }[/math].

This can be generalized: start recording only after observing [math]\displaystyle{ k }[/math] consecutive heads, compute [math]\displaystyle{ p^{H}_{k} }[/math], and compare it with [math]\displaystyle{ q^{H} }[/math]. For simplicity, we take [math]\displaystyle{ k=1 }[/math] here.
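
To make this recording rule concrete, here is a minimal sketch (my own illustration, not code from the paper; the function name and the encoding of flips as an 'H'/'T' string are assumptions) that extracts the recorded outcomes following a streak of [math]\displaystyle{ k }[/math] heads and computes the proportion of heads among them for a single sequence:

# Minimal sketch of the recording rule described above (illustration only).
# flips: a string such as "HTHHT"; k: required streak length.
# Returns p^H_k for this one sequence, or None if no flip is preceded by k consecutive heads.
def p_k_hat(flips, k=1):
	recorded = []   # outcomes observed right after a streak of k heads
	for j in range(k, len(flips)):
		if all(f == 'H' for f in flips[j-k:j]):   # the previous k flips are all heads
			recorded.append(flips[j])
	if not recorded:   # no record generated: the statistic is undefined
		return None
	return recorded.count('H') / len(recorded)

# Example: in "HHTH" the recorded outcomes (for k=1) are flips[1]='H' and flips[2]='T', so p^H_1 = 1/2.
print(p_k_hat("HHTH", k=1))   # 0.5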

The background of this problem is the hot-hand effect: does the success rate go up after a streak of made shots? Or, stated the other way around, the gambler's fallacy: does the probability of heads go down after a streak of heads? In real settings the former is more complicated, of course, because a streak may genuinely change the momentum and atmosphere of the game and thereby the shooting percentage. Earlier theoretical and empirical work [1] argued that theoretically [math]\displaystyle{ p^{H}_{k}=q^{H} }[/math], and that actual basketball statistics indeed show no hot-hand effect.

This paper challenges that earlier work, claiming that theoretically [math]\displaystyle{ p^{H}_{k}\neq q^{H} }[/math], and that actual basketball statistics do show a hot-hand effect.

At first glance, if this result were correct it would not only overturn the earlier findings but also strike at the theory itself: [math]\displaystyle{ p^{H}_{k} }[/math] is just a conditional probability, so how could it fail to equal [math]\displaystyle{ q^{H} }[/math] for independent events (coin flips)? It looks too astonishing, too significant, and too unlikely to be true!

After reading the paper [2] carefully, I found that the issue is really one of how the statistic is computed. When we ask "what is the proportion of heads in this record", the question is not well defined; there are two readings. In the first, the statistic specified above is computed within a single run of the experiment, a run meaning one sequence of [math]\displaystyle{ N }[/math] outcomes [math]\displaystyle{ x_{1}, x_{2}, \cdots, x_{N} }[/math]. In the second, it is computed over the pooled results of many, many runs, that is, over a large collection of sequences [math]\displaystyle{ \left\{x_{1}, x_{2}, \cdots, x_{N}\right\} }[/math]. The two answers can differ. Taking [math]\displaystyle{ k=1 }[/math] as an example, the former amounts to computing the ratio [math]\displaystyle{ \frac{HH}{HH+HT} }[/math] on a single run; if these per-run ratios are then averaged over many runs, we are effectively computing [math]\displaystyle{ \left\langle\frac{HH}{HH+HT}\right\rangle }[/math]. The latter amounts to directly computing the ratio of ensemble totals, [math]\displaystyle{ \frac{\left\langle HH\right\rangle}{\left\langle HH+HT \right\rangle} }[/math].

In other words, the first definition is [math]\displaystyle{ p^{Sample}\left(H\right)_{1}=\frac{1}{\left|S^{*}\right|}\sum_{s\in S^{*}}\frac{\sum_{j} x^{s}_{j}x^{s}_{j+1}=HH}{\sum_{j} x^{s}_{j}x^{s}_{j+1}=HH,HT} }[/math], where [math]\displaystyle{ S^{*} }[/math] excludes the samples that generate no record at all (whose denominator would otherwise be 0) and [math]\displaystyle{ \left|S^{*}\right| }[/math] is the number of remaining samples.
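
It is easy to check by brute force that this sample-based quantity is genuinely biased. The sketch below (my own check, not the paper's code; it assumes a fair coin, so that all [math]\displaystyle{ 2^{N} }[/math] sequences are equally likely) enumerates every sequence, computes the per-sequence ratio on those with a nonempty record, and averages; for [math]\displaystyle{ N=3 }[/math] it returns 5/12 rather than 1/2:

# Brute-force check of the bias in p^{Sample}(H)_1 for a fair coin (illustration only).
from itertools import product
from fractions import Fraction

def expected_p_sample(N):
	vals = []
	for seq in product("HT", repeat=N):   # all 2^N equally likely sequences
		hh = sum(1 for j in range(N-1) if seq[j] == "H" and seq[j+1] == "H")
		ht = sum(1 for j in range(N-1) if seq[j] == "H" and seq[j+1] == "T")
		if hh + ht > 0:   # keep only the sequences in S*
			vals.append(Fraction(hh, hh + ht))
	return sum(vals) / len(vals)   # average of the per-sequence ratio

print(expected_p_sample(3))   # 5/12, i.e. about 0.417, not 1/2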

The usual conditional probability, [math]\displaystyle{ q^{H}=P\left(x_{j+1}=H|x_{j}=H\right)=\frac{\sum_{s} x^{s}_{j}x^{s}_{j+1}=HH}{\sum_{s}x^{s}_{j}x^{s}_{j+1}=HH,HT} }[/math], is computed with [math]\displaystyle{ j }[/math] fixed, summing over a large sample of sequences; that is, it is a computation in the second sense, with [math]\displaystyle{ j }[/math] held at a fixed value. It is therefore not surprising that the first kind of computation gives a theoretically different result. Strictly speaking, even the second kind of computation is not the same object as this usual conditional probability.

Under the second kind of computation, the quantity of interest is [math]\displaystyle{ p^{Ensemble}\left(H\right)_{1}=\frac{\sum_{s,j} x^{s}_{j}x^{s}_{j+1}=HH}{\sum_{s,j} x^{s}_{j}x^{s}_{j+1}=HH,HT} }[/math]. However, since for every specific [math]\displaystyle{ j }[/math] the ratio equals [math]\displaystyle{ q^{H} }[/math], the combined ratio is still [math]\displaystyle{ q^{H} }[/math].
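
The gap between the two computations is easy to reproduce numerically. Below is a minimal Monte Carlo sketch (my own; the choices [math]\displaystyle{ q^{H}=0.5 }[/math], [math]\displaystyle{ N=10 }[/math] and 200000 sequences are arbitrary) that estimates both quantities from the same simulated ensemble; the pooled ratio comes out essentially equal to [math]\displaystyle{ q^{H} }[/math], while the per-sequence average stays clearly below it:

# Monte Carlo comparison of p^{Sample}(H)_1 and p^{Ensemble}(H)_1 (illustration only).
import random

def simulate(q_H=0.5, N=10, S=200000, seed=0):
	rng = random.Random(seed)
	per_seq_ratios = []   # one HH/(HH+HT) value per sequence (first definition)
	HH_total = HT_total = 0   # pooled counts over the whole ensemble (second definition)
	for _ in range(S):
		flips = [rng.random() < q_H for _ in range(N)]   # True = head
		hh = sum(1 for j in range(N-1) if flips[j] and flips[j+1])
		ht = sum(1 for j in range(N-1) if flips[j] and not flips[j+1])
		HH_total += hh
		HT_total += ht
		if hh + ht > 0:   # only sequences in S* contribute to the per-sequence average
			per_seq_ratios.append(hh / (hh + ht))
	p_sample = sum(per_seq_ratios) / len(per_seq_ratios)
	p_ensemble = HH_total / (HH_total + HT_total)
	return p_sample, p_ensemble

print(simulate())   # the first number is clearly below 0.5, the second is essentially 0.5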

Now that we have sorted out the settings in which theoretically different values arise, and what those settings mean, how are the statistics actually computed in practice?

To some extent, if the statistic is computed for each game separately and the overall average is then taken over these per-game results, the situation is indeed closer to case one, the case that theoretically does not equal [math]\displaystyle{ q^{H} }[/math]. For spectators, it may well be that the hot-hand proportion is first felt within a single game, and the mind then averages these proportions over all the games they have watched. The same may be true for players. It is not true for analysts, who base their analysis on data from all games.

Therefore, the real significance of this paper is that it reveals that, in some scenarios, what people perceive cannot be described by a given mathematical definition and may instead require constructing a different one. It is not, as the authors claim, the discovery of a universal phenomenon about the rigorously defined conditional probability, to be used as a basis for arguing that the gambler's fallacy is reasonable.

If, instead, all the results are pooled together before the statistic is computed, the situation is closer to case two, the case that theoretically equals [math]\displaystyle{ q^{H} }[/math].

In other words, this result comes purely from the difference in how the statistic is computed: if the hot-hand effect people talk about is the average within each game, then the calculation in [2] should be used; if it is instead the average impression formed from the pooled records of a large number of different games, then the calculation in [1] should be used.

The deeper reason: statistics is always about ensemble averages, not about averages within a single sample. If a single flip counts as one sample, the sample space has to be generated by repeating that same flip many, many times; if [math]\displaystyle{ N }[/math] flips performed in some order or fashion count as one sample, the sample space still has to be generated by repeating such [math]\displaystyle{ N }[/math]-flip rounds many, many times. The limit of letting [math]\displaystyle{ N }[/math] go to infinity therefore has no statistical meaning; only letting the number of systems in the ensemble, that is S, go to infinity is the statistical limit. In fact, from the standpoint of probability theory, [math]\displaystyle{ p^{Sample}\left(H\right)_{1} }[/math] is not a meaningful quantity. This shows how important it is to understand the concepts correctly. How people actually form their estimates of the hot-hand effect is, of course, a separate question.
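
This point about the two limits can be made concrete with the same brute-force enumeration as above (my own sketch; fair coin assumed). For each fixed [math]\displaystyle{ N }[/math], the number printed below is the exact expectation of the per-sequence ratio over the entire ensemble of [math]\displaystyle{ 2^{N} }[/math] sequences, i.e. it is already the [math]\displaystyle{ S\rightarrow\infty }[/math] value: it stays strictly below [math]\displaystyle{ q^{H}=0.5 }[/math] for every finite [math]\displaystyle{ N }[/math], and according to the paper's abstract the bias generally decreases as the sequences get longer, but at fixed [math]\displaystyle{ N }[/math] no amount of additional sequences removes it.

# Exact S -> infinity value of the per-sequence average for several N (illustration only).
from itertools import product

def exact_p_sample(N):
	vals = []
	for seq in product((1, 0), repeat=N):   # 1 = head, 0 = tail, all equally likely
		hh = sum(seq[j] * seq[j+1] for j in range(N-1))
		ht = sum(seq[j] * (1 - seq[j+1]) for j in range(N-1))
		if hh + ht > 0:
			vals.append(hh / (hh + ht))
	return sum(vals) / len(vals)

for N in (3, 4, 10, 16):
	print(N, round(exact_p_sample(N), 4))
# Every value is strictly below 0.5 no matter how many sequences one would simulate,
# because it is already an exact expectation over the whole ensemble of 2^N sequences.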

Back to the significance of this work: the paper claims that this probability of theirs explains why intuitions such as the gambler's fallacy make sense. That is completely wrong. In the gambler's-fallacy setting, the corresponding computation should be the ensemble average with j fixed, that is, the standard mathematical conditional probability, so the theoretically correct mathematical answer applies, not the conditional probability they define. Their conditional probability arises only when the statistics are done their way: compute the ratio within each sequence first, and then take the ensemble average of those ratios. I have already picked a title for the follow-up work: hot-hand effect and gamblers' fallacy, distinguishing intuitive and mathematical definitions is the key. The second half of that conclusion, however, still depends on what the actual data give under the two ways of computing.

Short summary in English

  1. Given a coin whose probability of coming up heads (H) is fixed in advance at [math]\displaystyle{ q^{H} }[/math].
  2. The usual mathematical conditional probability is defined as [math]\displaystyle{ P\left(x_{j+1}=H|x_{j}=H\right)=\frac{\sum_{s} x^{s}_{j}x^{s}_{j+1}=HH}{\sum_{s}x^{s}_{j}x^{s}_{j+1}=HH,HT} }[/math], where [math]\displaystyle{ j }[/math] is fixed, and we have [math]\displaystyle{ P\left(x_{j+1}=H|x_{j}=H\right)=q^{H} }[/math]. The key point is that [math]\displaystyle{ \sum_{s} }[/math] runs over the whole ensemble of sequences, not within each sequence.
  3. Another definition is [math]\displaystyle{ p^{Ensemble}\left(H\right)_{1}=\frac{\sum_{s,j} x^{s}_{j}x^{s}_{j+1}=HH}{\sum_{s,j} x^{s}_{j}x^{s}_{j+1}=HH,HT} }[/math]. It can be shown that this definition gives the same value as the one above.
  4. [2] defines [math]\displaystyle{ p^{Sample}\left(H\right)_{1}=\frac{1}{\left|S^{*}\right|}\sum_{s\in S^{*}}\frac{\sum_{j} x^{s}_{j}x^{s}_{j+1}=HH}{\sum_{j} x^{s}_{j}x^{s}_{j+1}=HH,HT} }[/math]: the numbers of HH and HT are counted within a single sequence first, and the resulting ratio is then averaged over the whole ensemble of sequences. Here [math]\displaystyle{ S^{*} }[/math] is the set of sequences with [math]\displaystyle{ \sum_{j} x^{s}_{j}x^{s}_{j+1}=HH,HT \gt 0 }[/math], which avoids [math]\displaystyle{ \frac{0}{0} }[/math].
  5. It is not clear to me which definition is used in [1].
  6. Which one should be used in reality when people talk about the hot-hand effect? If each game is averaged first and those per-game values are then averaged over a set of games, then [math]\displaystyle{ p^{Sample}\left(H\right)_{1} }[/math] should be used. If instead the records from all games are pooled together first, then [math]\displaystyle{ p^{Ensemble}\left(H\right)_{1} }[/math] should be used.

Next steps

Implement both calculations on the data from the original paper, or on more NBA data [3][4][5][6][7], and compare the results with those of these two papers. That would settle the question completely.

The above explanation conceptually distinguishes the usual mathematical [math]\displaystyle{ P\left(x_{j+1}=H|x_{j}=H\right) }[/math], [math]\displaystyle{ p^{Ensemble}\left(H\right)_{1} }[/math] and [math]\displaystyle{ p^{Sample}\left(H\right)_{1} }[/math]. It is clear and satisfying to me already. However, in order to indeed have a complete picture and provide an end-of-story answer to the original question about the hot-hand effect, one should go and collect the same data or a large data set and apply both [math]\displaystyle{ p^{Ensemble}\left(H\right)_{1} }[/math] and [math]\displaystyle{ p^{Sample}\left(H\right)_{1} }[/math], and furthermore compare the results against those in [2] and [1].

Related work

[8] [9]

References

  1. Gilovich, T., R. Vallone, and A. Tversky (1985): “The Hot Hand in Basketball: On the Misperception of Random Sequences,” Cognitive Psychology, 17, 295–314.
  2. Miller, Joshua Benjamin and Sanjurjo, Adam, Surprised by the Gambler's and Hot Hand Fallacies? A Truth in the Law of Small Numbers (November 15, 2016). IGIER Working Paper No. 552. Available at SSRN: https://ssrn.com/abstract=2627354 or http://dx.doi.org/10.2139/ssrn.2627354
  3. James Piette, Sathyanarayan Anand, and Kai Zhang, “Scoring and Shooting Abilities of NBA Players.”
  4. https://github.com/rajshah4/BasketballData/tree/master/2016.NBA.Raw.SportVU.Game.Logs
  5. https://www.kaggle.com/wh0801/NBA-16-17-regular-season-shot-log, https://www.kaggle.com/dansbecker/nba-shot-logs
  6. https://www.mysportsfeeds.com
  7. Csapo P, Raab M (2014): “Hand down, Man down.” Analysis of Defensive Adjustments in Response to the Hot Hand in Basketball Using Novel Defense Metrics. PLoS ONE 9(12): e114184. https://doi.org/10.1371/journal.pone.0114184
  8. Joshua B. Miller and Adam Sanjurjo, “A Cold Shower for the Hot Hand Fallacy.”
  9. Andrew Bocskocsky, John Ezekowitz, and Carolyn Stein, “The Hot Hand: A New Approach to an Old ‘Fallacy’.”

Appendix: program

PEnPIn.py
# http://www.bigphysics.org/index.php/%E5%88%86%E7%B1%BB:Markov%E8%BF%87%E7%A8%8B%E7%9A%84%E9%98%B6%E5%92%8C%E8%BD%AC%E7%A7%BB%E7%9F%A9%E9%98%B5%E7%9A%84%E8%AE%A1%E7%AE%97
#1-step Markovian transfer matrix is defined as p1|0><0| + (1-p1)|1><0|+ (1-p2)|0><1| + p2|1><1| = [p1, 1-p2 \\1-p1, p2]
#2-step Markovian transfer matrix is defined as q00|0><00| + (1-q00)|1><00|+ q01|0><01| + (1-q01)|1><01| + q10|0><10| + (1-q10)|1><10|+ q11|0><11| + (1-q11)|1><11|= [q00, q01, q10, q11\\1-q00, 1-q01, 1-q10, 1-q11], |r_{t}><r_{t-1}r_{t-2}|

import sys, getopt
import random, math
import numpy as np
import xlrd

def GenerateSingleStep(r1, p00, p11): #generate the next shot from the current state r1, given p(0-->0)=p00 and p(1-->1)=p11
	x=random.uniform(0,1) #can be replaced with a better and specialized random number generator
	if r1==0:   #transfer matrix 
		if x<p00:   #M_{00}
			r=0
		else:       #M_{01}
			r=1
	if r1==1:       
		if x<p11:   #M_{11}
			r=1
		else:      #M_{10}
			r=0
	return r

def GenerateSequence(p00, p11, L, T, RunSymbol, EndSymbol):
	playerShots=list()
	for trial in range(L):
		r1=GenerateSingleStep(0, p00, p11)
		playerShots.append(r1)
		for t in range(T-1):
			r=GenerateSingleStep(r1, p00, p11)
			playerShots.append(r)
			r1=r
		playerShots.append(RunSymbol)	
	playerShots.append(EndSymbol)	
	return playerShots

def testPEnPIn(L, T, RunSymbol, EndSymbol):
	for P00 in range(10):
		for P11 in range(10):
			p00=1.0/(1.0*(P00+2))
			p11=1.0/(1.0*(P11+2))
			playerShots=GenerateSequence(p00, p11, L, T, RunSymbol, EndSymbol)
			[MEn, DeltaMEn, MIn, DeltaMIn, p]=PEnPIn(playerShots, RunSymbol, EndSymbol)
			if (p is None or MEn is None or MIn is None):
				print("%3d, %3d, %5.3f, %5.3f, NaN" %(L, T, p00, p11))
			else:
				print("%3d, %3d, %5.3f, %5.3f, %5.3f, %5.3f, %5.3f, %5.3f, %5.3f, %5.3f, %5.3f, %5.3f" %(L, T, p00, p11, MEn[0,0], MEn[1,1], DeltaMEn[0,0], DeltaMEn[1,1], MIn[0,0], MIn[1,1], DeltaMIn[0,0], DeltaMIn[1,1]))
		

def PEnPIn(playerShots, RunSymbol, EndSymbol):
	MEn=None;MIn=None;DeltaMEn=None;DeltaMIn=None;p=None
	
	P1=0;Q1=0 #final distribution		
	P1ensemble = 0; Q1ensemble = 0 #number of 00 and 01 in the records of all time sequences
	P2ensemble = 0; Q2ensemble = 0 #number of 10 and 11 in the records of all time sequences
	records0=0; records1=0 #number of records of 0x and 1x in all time sequences
	p1ensemble=0; p2ensemble=0 #final result of matrix over the whole ensemble
	p1sample=0; p2sample=0 #final result of matrix per trajectory first and then over all the trajectories
	P1sample=0; Q1sample=0; P2sample=0; Q2sample=0 #statistics per trajectory, start the next trajectory
	DeltaP1sample=0.0;DeltaP2sample=0.0  #initial value of error bar is set to be 0.0, the lowest limit
	DeltaP1ensemble=0.0;DeltaP2ensemble=0.0 #initial value of error bar is set to be 0.0, the lowest limit
	
	r1=playerShots[0]
	i=1
	while i < len(playerShots):
		r=playerShots[i] #take the next value from playerShots
		if r==EndSymbol:      #as an indicator of EndSymbol, the end of all records, thus time to calculate trajectory average
			if(records0>0):
				p1sample=p1sample/(1.0*records0)  #p00 of matrix [p00, 1-p11 \\1-p00, p11] calculated for each sample sequences, then averaged over all the sequences
				DeltaP1sample=DeltaP1sample/(1.0*records0) #final error bar of p00
			else:     #when there is no record of 0x at all
				p1sample=None
				DeltaP1sample=None
			if(records1>0):
				p2sample=p2sample/(1.0*records1)  #p11 of matrix [p00, 1-p11 \\1-p00, p11] calculated for each sample sequences, then averaged over all the sequences
				DeltaP2sample=DeltaP2sample/(1.0*records1) #final error bar of p11
			else:     #when there is no record of 1x at all
				p2sample=None
				DeltaP2sample=None
			if(1.0*P1ensemble+1.0*Q1ensemble>0): #to avoid 0/0
				p1ensemble=1.0*P1ensemble/(1.0*P1ensemble+1.0*Q1ensemble)       #p00 of matrix [p00, 1-p11 \\1-p00, p11] directly calculated over all samples, thus from the whole ensemble
				DeltaP1ensemble=math.sqrt(p1ensemble*(1.0-p1ensemble)/(1.0*P1ensemble+1.0*Q1ensemble))  #error bar of p00
			else:     #when there is no record of 0x at all in the whole ensemble
				p1ensemble=None
				DeltaP1ensemble=None
			if(1.0*P2ensemble+1.0*Q2ensemble>0): #to avoid 0/0
				p2ensemble=1.0*P2ensemble/(1.0*P2ensemble+1.0*Q2ensemble)       #p11 of matrix [p00, 1-p11 \\1-p00, p11] directly calculated over all samples, thus from the whole ensemble
				DeltaP2ensemble=math.sqrt(p2ensemble*(1.0-p2ensemble)/(1.0*P2ensemble+1.0*Q2ensemble))  #error bar of p11
			else:     #when there is no record of 0x at all in the whole ensemble
				p2ensemble=None
				DeltaP2ensemble=None	
			if(p1ensemble!=None and p2ensemble!=None):
				MEn=np.array([[p1ensemble, 1.0-p2ensemble],[1.0-p1ensemble, p2ensemble]])
				DeltaMEn=np.array([[DeltaP1ensemble, DeltaP2ensemble],[DeltaP1ensemble, DeltaP2ensemble]])
			if(p1sample!=None and p2sample!=None):	
				MIn=np.array([[p1sample, 1.0-p2sample],[1.0-p1sample, p2sample]])
				DeltaMIn=np.array([[DeltaP1sample, DeltaP2sample],[DeltaP1sample, DeltaP2sample]])
			if(1.0*P1+1.0*Q1>0): #to avoid 0/0
				p=1.0*P1/(1.0*P1+1.0*Q1)            
			return [MEn, DeltaMEn, MIn, DeltaMIn, p] #transfer matrix from ensemble, its error bar, transfer matrix from trajectory, its error bar, and also final distribution
	
		if r==RunSymbol:      #as an indicator of RunSymbol, the end of a single trajectory
			if P1sample>0 or Q1sample>0: #in the case of no records generated this round, one need this condition to avoid 0/0
				records0=records0+1
				p1Run=1.0*P1sample/(1.0*P1sample+1.0*Q1sample) #p00 of matrix [p00, 1-p11 \\1-p00, p11] within a trajectory
				DeltaP1Run=math.sqrt(p1Run*(1.0-p1Run)/(1.0*P1sample+1.0*Q1sample)) #error bar of p00
				p1sample=p1sample+p1Run
				DeltaP1sample=DeltaP1sample+DeltaP1Run	
			if P2sample>0 or Q2sample>0: #in the case of no records generated this round, one need this condition to avoid 0/0
				records1=records1+1
				p2Run=1.0*P2sample/(1.0*P2sample+1.0*Q2sample) #p11 of matrix [p00, 1-p11 \\1-p00, p11] within a trajectory
				DeltaP2Run=math.sqrt(p2Run*(1.0-p2Run)/(1.0*P2sample+1.0*Q2sample)) #error bar of p11
				p2sample=p2sample+p2Run
				DeltaP2sample=DeltaP2sample+DeltaP2Run		
			P1sample=0; Q1sample=0; P2sample=0; Q2sample=0 #statistics per trajectory, start the next trajectory
			r1=playerShots[i+1]
			if(r1!=EndSymbol):   #if r1!=EndSymbol, then start the trajectory again from playerShots[i+1], thus playerShots[i+2] will be the next r
				i=i+2
			else:           #if r1==EndSymbol, then the program should end at the next step by encountering r=playerShots[i+1]==EndSymbol
				i=i+1
		else:
			if r1==0:                     #in the case of r1==0
				if r==0:                  #0--->0
					P1sample=P1sample+1 #statistics per trajectory 
					P1ensemble=P1ensemble+1 #statistics accumulated throughout the whole ensemble
					P1=P1+1
				else:                     #0--->1
					Q1sample=Q1sample+1 #statistics per trajectory 
					Q1ensemble=Q1ensemble+1 #statistics accumulated throughout the whole ensemble
					Q1=Q1+1
			else:                         #in the case of r1==1
				if r==0:                  #1--->0 
					Q2sample=Q2sample+1 #statistics per trajectory 
					Q2ensemble=Q2ensemble+1 #statistics accumulated throughout the whole ensemble
					P1=P1+1
				else:                     #1--->1
					P2sample=P2sample+1 #statistics per trajectory
					P2ensemble=P2ensemble+1 #statistics accumulated throughout the whole ensemble
					Q1=Q1+1
			r1=r
			i=i+1	


	
def main(argv):
	#initialize running time parameters
	checkMEnMIn=0 #check the Markovian matrices or not
	data=1     #working on data or simulation
	L=0        #number of trajectories
	T=0        #length of each trajectory
	path=" "   #path of the data file 
	
	#take parameters from command-line input
	try:
		opts, args = getopt.getopt(argv, "hL:T:", ["path=", "check=", "data="])
	except getopt.GetoptError:
		print ("Error: please use the command as PEnPIn.py --path <path> --check <check> -L <L> -T <T> --data ")
		sys.exit(2)	
	for opt, arg in opts:
		if opt == "-h":
			print("PEnPIn.py --path <path>")
			sys.exit()
		elif opt == "--path":
			path = arg
		elif opt == "--check":
			checkMEnMIn = int(arg)
		elif opt == "--data":
			data = int(arg)
		elif opt == "-L":
			L = int(arg)
		elif opt == "-T":
			T = int(arg)
	if path==" ":
		print("we must have an excel file of players' shot logs")
		return
	#parameter input ends here
	RunSymbol=6 #end of a single trajectory run
	EndSymbol=9 #end of the whole ensemble record
	
	#This part is only to test the PEnPIn code
	if(L>0 and T>0):                   
		testPEnPIn(L, T, RunSymbol, EndSymbol)
	if(data):
		#Start to process the empirical players' shot log
		book = xlrd.open_workbook(path)   #open excel file
		sheet = book.sheet_by_index(0) 	# get the first worksheet
		for row in range(sheet.nrows-1):    # read a row
			playerID = int(sheet.row_values(row+1)[0])
			playerShots = sheet.row_values(row+1)
			del playerShots[0:3] 
			[MEn, DeltaMEn, MIn, DeltaMIn, p]=PEnPIn(playerShots, RunSymbol, EndSymbol)
			if (p is None or MEn is None or MIn is None):
				print("%3d, NaN" %(playerID))
			else:
				print("%3d, %5.3f, %5.3f, %5.3f, %5.3f, %5.3f, %5.3f, %5.3f, %5.3f, %5.3f, %5.3f, %5.3f, %5.3f, %5.3f, %5.3f, %5.3f, %5.3f, %5.3f" %(playerID, p, MEn[0,0], MEn[0,1], MEn[1,0], MEn[1,1], DeltaMEn[0,0], DeltaMEn[0,1], DeltaMEn[1,0], DeltaMEn[1,1], MIn[0,0], MIn[0,1], MIn[1,0], MIn[1,1], DeltaMIn[0,0], DeltaMIn[0,1], DeltaMIn[1,0], DeltaMIn[1,1],))
			if(checkMEnMIn):
				print("transfer matrix calculated from ensemble:")
				print("[%5.3f, %5.3f \\\\ %5.3f, %5.3f]" %(MEn[0,0], MEn[0,1], MEn[1,0], MEn[1,1]))
				Eensemble,Pensemble=np.linalg.eig(MEn)
				print("Stationary state of the theoretical transfer matrix is:")
				for i in range(2):
					if abs(Eensemble[i]-1.0)<1.0e-3:
						print("[%5.3f, %5.3f]" %(Pensemble[0,i]/(Pensemble[0,i]+Pensemble[1,i]), (Pensemble[1,i]/(Pensemble[0,i]+Pensemble[1,i]))))
				print("Real final distribution is:")
				print("[%5.3f, %5.3f]" %(p, 1.0-p))
				print("transfer matrix calculated from individual trajectories and then average all trjectories:")
				print("[%5.3f, %5.3f \\\\ %5.3f, %5.3f]" %(MIn[0,0], MIn[0,1], MIn[1,0], MIn[1,1]))
				Esample,Psample=np.linalg.eig(MIn)
				print("Stationary state of the theoretical transfer matrix is:")
				for i in range(2):
					if abs(Esample[i]-1.0)<1.0e-3:
						print("[%5.3f, %5.3f]" %(Psample[0,i]/(Psample[0,i]+Psample[1,i]), (Psample[1,i]/(Psample[0,i]+Psample[1,i]))))		

if __name__ == "__main__":
    main(sys.argv[1:])

How to run

python3 PEnPIn.py --path Players.xlsx --data 1
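
If one only wants the self-test on simulated Markov sequences, with no shot-log file (this assumes the path check in main is applied only in data mode, as written above; L is the number of simulated trajectories and T their length), a run of the following form should work:

python3 PEnPIn.py -L 100 -T 50 --data 0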
