
TensorFlow Notes: Pattern Recognition with GoogLeNet

    LittleUqeer · 2017-01-17 16:05:44 +08:00 · 4758 clicks

    GoogLeNet, winner of the 2014 ILSVRC challenge, demonstrated one thing: more convolutions and deeper layers can give better results. (It did not, of course, prove that shallower networks cannot achieve the same.)

    It widens and deepens the convolutional network with the NiN (Network-in-Network) structure, and its way of merging sparse matrices into dense ones has considerable engineering value.

    This post uses these NiN-style composite filters to predict technical-analysis factors for the HS300 ETF, and by stacking different indices it tries to look for correlations that might underlie 'index rotation'.

    import zipfile
    import matplotlib.pyplot as plt
    import matplotlib.image as mpimg
    import numpy as np
    import pandas as pd
    fig = zipfile.ZipFile('fig1.zip','r')
    for file in fig.namelist():
        fig.extract(file,'tmp/')
    
    

    1.1 LeNet-5 is a typical convolutional network; at the time, most banks in the US used it to recognize the handwritten digits on checks.

    image = mpimg.imread("tmp/A2.png")
    plt.figure(figsize=(16,16))
    plt.axis("off")
    plt.imshow(image)
    plt.show()
    
    

    1.2 The NiN-style Inception module, the core convolutional block of GoogLeNet: a width-expanding filter that behaves like a highly non-linear filter.

    image = mpimg.imread("tmp/A3.png")
    plt.figure(figsize=(16,46))
    plt.axis("off")
    plt.imshow(image)
    plt.show()
    
    

    1.3 GoogLeNet topology diagram. You can see that GoogLeNet builds on the LeNet structure and uses the Inception_unit filter extensively to widen and deepen the network.

    In the paper 'Going deeper with convolutions', the way the Inception_unit filter merges sparse matrices into dense ones has considerable engineering value.
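
    One concrete way to see that value: in inception 3a the 5x5 branch first reduces the 192 input channels to 16 with a 1x1 convolution before applying the 5x5 convolution, which cuts the weight count by roughly a factor of ten. A back-of-the-envelope check of the filter shapes defined in the code below (my addition):

    direct  = 5 * 5 * 192 * 32                    # 5x5 conv straight from 192 to 32 channels: 153600 weights
    reduced = 1 * 1 * 192 * 16 + 5 * 5 * 16 * 32  # 1x1 reduce to 16, then 5x5 to 32: 15872 weights
    print(direct, reduced)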

    image = mpimg.imread("tmp/A1.png")
    plt.figure(figsize=(16,46))
    plt.axis("off")
    plt.imshow(image)
    plt.show()
    
    

    1.4 GoogLeNet topology code. While reproducing the paper (Going deeper with convolutions) in TensorFlow, I found that SAME (zero) padding performs slightly but consistently better than VALID.
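
    For the first 7x7, stride-2 convolution used below on a 224-pixel-wide input, SAME padding keeps ceil(224/2) = 112 output positions per side, while VALID would give ceil((224-7+1)/2) = 109 and shrink the feature maps faster. A quick arithmetic check (my addition, not part of the original notebook):

    import math
    n, k, s = 224, 7, 2
    print(math.ceil(n / s))            # SAME padding  -> 112
    print(math.ceil((n - k + 1) / s))  # VALID padding -> 109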

    import tensorflow as tf

    def inception_unit(inputdata, weights, biases):
        # generic Inception module; the weights/biases dicts select which one (e.g. inception 3a)
        inception_in = inputdata
        
        # Conv 1x1+S1
        inception_1x1_S1 = tf.nn.conv2d(inception_in, weights['inception_1x1_S1'], strides=[1,1,1,1], padding='SAME')
        inception_1x1_S1 = tf.nn.bias_add(inception_1x1_S1, biases['inception_1x1_S1'])
        inception_1x1_S1 = tf.nn.relu(inception_1x1_S1)
        # Conv 3x3+S1
        inception_3x3_S1_reduce = tf.nn.conv2d(inception_in, weights['inception_3x3_S1_reduce'], strides=[1,1,1,1], padding='SAME')
        inception_3x3_S1_reduce = tf.nn.bias_add(inception_3x3_S1_reduce, biases['inception_3x3_S1_reduce'])
        inception_3x3_S1_reduce = tf.nn.relu(inception_3x3_S1_reduce)
        inception_3x3_S1 = tf.nn.conv2d(inception_3x3_S1_reduce, weights['inception_3x3_S1'], strides=[1,1,1,1], padding='SAME')
        inception_3x3_S1 = tf.nn.bias_add(inception_3x3_S1, biases['inception_3x3_S1'])
        inception_3x3_S1 = tf.nn.relu(inception_3x3_S1)
        # Conv 5x5+S1
        inception_5x5_S1_reduce = tf.nn.conv2d(inception_in, weights['inception_5x5_S1_reduce'], strides=[1,1,1,1], padding='SAME')
        inception_5x5_S1_reduce = tf.nn.bias_add(inception_5x5_S1_reduce, biases['inception_5x5_S1_reduce'])
        inception_5x5_S1_reduce = tf.nn.relu(inception_5x5_S1_reduce)
        inception_5x5_S1 = tf.nn.conv2d(inception_5x5_S1_reduce, weights['inception_5x5_S1'], strides=[1,1,1,1], padding='SAME')
        inception_5x5_S1 = tf.nn.bias_add(inception_5x5_S1, biases['inception_5x5_S1'])
        inception_5x5_S1 = tf.nn.relu(inception_5x5_S1)
        # MaxPool
        inception_MaxPool = tf.nn.max_pool(inception_in, ksize=[1,3,3,1], strides=[1,1,1,1], padding='SAME')
        inception_MaxPool = tf.nn.conv2d(inception_MaxPool, weights['inception_MaxPool'], strides=[1,1,1,1], padding='SAME')
        inception_MaxPool = tf.nn.bias_add(inception_MaxPool, biases['inception_MaxPool'])
        inception_MaxPool = tf.nn.relu(inception_MaxPool)
        # Concat
        # tf.concat(values, axis, name='concat') in TF 1.x (earlier releases took concat_dim as the first argument):
        # axis is the dimension along which the tensors are joined, values is the list of tensors to join;
        # sizes may differ along the concat axis but must match on every other dimension.
        inception_out = tf.concat([inception_1x1_S1, inception_3x3_S1, inception_5x5_S1, inception_MaxPool], axis=3)
        return inception_out
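
    # Channel bookkeeping for the concat above (illustrative note, my addition):
    # for inception 3a the four branches yield 64 + 128 + 32 + 32 = 256 feature
    # maps, which is exactly the input depth expected by the 3b filters
    # (conv_W_3b) defined further below. A runnable check once the weight
    # dictionaries exist:
    #   demo = tf.zeros([1, 28, 28, 192])
    #   print(inception_unit(demo, conv_W_3a, conv_B_3a).get_shape())  # (1, 28, 28, 256)
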
    def GoogleLeNet_topological_structure(x, weights, biases, conv_W_3a, conv_B_3a, conv_W_3b, conv_B_3b,
                    conv_W_4a, conv_B_4a, conv_W_4b, conv_B_4b,
                    conv_W_4c, conv_B_4c, conv_W_4d, conv_B_4d,
                    conv_W_4e, conv_B_4e, conv_W_5a, conv_B_5a,
                    conv_W_5b, conv_B_5b, dropout=0.8):
        # A0: input data
        x = tf.reshape(x,[-1,224,224,4])  # reshape the flat input into NHWC format
        
        # A1  Conv 7x7+S2
        x = tf.nn.conv2d(x, weights['conv1_7x7_S2'], strides=[1,2,2,1], padding='SAME')
        # convolution layer: 7x7 kernel, stride 2x2
        x = tf.nn.bias_add(x, biases['conv1_7x7_S2'])
        #print (x.get_shape().as_list())
        # bias vector
        x = tf.nn.relu(x)
        # activation function
        x = tf.nn.max_pool(x, ksize=pooling['pool1_3x3_S2'], strides=[1,2,2,1], padding='SAME')
        # max pooling
        x = tf.nn.local_response_normalization(x, depth_radius=2, bias=2.0, alpha=1e-4, beta=0.75)
        # local response normalization (depth_radius must be an integer; 2 corresponds to a local size of 5)
        
        # A2
        x = tf.nn.conv2d(x, weights['conv2_1x1_S1'], strides=[1,1,1,1], padding='SAME')
        x = tf.nn.bias_add(x, biases['conv2_1x1_S1'])
        x = tf.nn.conv2d(x, weights['conv2_3x3_S1'], strides=[1,1,1,1], padding='SAME')
        x = tf.nn.bias_add(x, biases['conv2_3x3_S1'])
        x = tf.nn.local_response_normalization(x, depth_radius=2, bias=2.0, alpha=1e-4, beta=0.75)
        x = tf.nn.max_pool(x, ksize=pooling['pool2_3x3_S2'], strides=[1,2,2,1], padding='SAME')
        
        # inception 3
        inception_3a = inception_unit(inputdata=x, weights=conv_W_3a, biases=conv_B_3a)
        inception_3b = inception_unit(inception_3a, weights=conv_W_3b, biases=conv_B_3b)
        
        # pooling layer
        x = inception_3b
        x = tf.nn.max_pool(x, ksize=pooling['pool3_3x3_S2'], strides=[1,2,2,1], padding='SAME' )
        
        # inception 4
        inception_4a = inception_unit(inputdata=x, weights=conv_W_4a, biases=conv_B_4a)
        # the first auxiliary branch could be taken here
        #softmax0 = inception_4a
        inception_4b = inception_unit(inception_4a, weights=conv_W_4b, biases=conv_B_4b)    
        inception_4c = inception_unit(inception_4b, weights=conv_W_4c, biases=conv_B_4c)
        inception_4d = inception_unit(inception_4c, weights=conv_W_4d, biases=conv_B_4d)
        # the second auxiliary branch could be taken here
        #softmax1 = inception_4d
        inception_4e = inception_unit(inception_4d, weights=conv_W_4e, biases=conv_B_4e)
        
        # pooling
        x = inception_4e
        x = tf.nn.max_pool(x, ksize=pooling['pool4_3x3_S2'], strides=[1,2,2,1], padding='SAME' )
        
        # inception 5
        inception_5a = inception_unit(x, weights=conv_W_5a, biases=conv_B_5a)
        inception_5b = inception_unit(inception_5a, weights=conv_W_5b, biases=conv_B_5b)
        softmax2 = inception_5b
       
        # head: average pooling, dropout and the final fully connected layer
        softmax2 = tf.nn.avg_pool(softmax2, ksize=[1,7,7,1], strides=[1,1,1,1], padding='SAME')
        softmax2 = tf.nn.dropout(softmax2, keep_prob=0.4)
        softmax2 = tf.reshape(softmax2, [-1,weights['FC2'].get_shape().as_list()[0]])
        softmax2 = tf.nn.bias_add(tf.matmul(softmax2,weights['FC2']),biases['FC2'])
        #print(softmax2.get_shape().as_list())
        return softmax2  
    weights = {
        'conv1_7x7_S2': tf.Variable(tf.random_normal([7,7,4,64])),
        'conv2_1x1_S1': tf.Variable(tf.random_normal([1,1,64,64])),
        'conv2_3x3_S1': tf.Variable(tf.random_normal([3,3,64,192])),
        'FC2': tf.Variable(tf.random_normal([7*7*1024, 3]))
    }
    
    biases = {
        'conv1_7x7_S2': tf.Variable(tf.random_normal([64])),
        'conv2_1x1_S1': tf.Variable(tf.random_normal([64])),
        'conv2_3x3_S1': tf.Variable(tf.random_normal([192])),
        'FC2': tf.Variable(tf.random_normal([3]))
        
    }
    pooling = {
        'pool1_3x3_S2': [1,3,3,1],
        'pool2_3x3_S2': [1,3,3,1],
        'pool3_3x3_S2': [1,3,3,1],
        'pool4_3x3_S2': [1,3,3,1]
    }
    conv_W_3a = {
        'inception_1x1_S1': tf.Variable(tf.random_normal([1,1,192,64])),
        'inception_3x3_S1_reduce': tf.Variable(tf.random_normal([1,1,192,96])),
        'inception_3x3_S1': tf.Variable(tf.random_normal([3,3,96,128])),  # [kernel_h, kernel_w, in_channels, out_channels]
        'inception_5x5_S1_reduce': tf.Variable(tf.random_normal([1,1,192,16])),
        'inception_5x5_S1': tf.Variable(tf.random_normal([5,5,16,32])),
        'inception_MaxPool': tf.Variable(tf.random_normal([1,1,192,32]))
        
    }
    conv_B_3a = {
        'inception_1x1_S1': tf.Variable(tf.random_normal([64])),
        'inception_3x3_S1_reduce': tf.Variable(tf.random_normal([96])),
        'inception_3x3_S1': tf.Variable(tf.random_normal([128])),
        'inception_5x5_S1_reduce': tf.Variable(tf.random_normal([16])),
        'inception_5x5_S1': tf.Variable(tf.random_normal([32])),
        'inception_MaxPool': tf.Variable(tf.random_normal([32]))
    }
    conv_W_3b = {
        'inception_1x1_S1': tf.Variable(tf.random_normal([1,1,256,128])),
        'inception_3x3_S1_reduce': tf.Variable(tf.random_normal([1,1,256,128])),
        'inception_3x3_S1': tf.Variable(tf.random_normal([3,3,128,192])),
        'inception_5x5_S1_reduce': tf.Variable(tf.random_normal([1,1,256,32])),
        'inception_5x5_S1': tf.Variable(tf.random_normal([5,5,32,96])),
        'inception_MaxPool': tf.Variable(tf.random_normal([1,1,256,64]))
        
    }
    conv_B_3b = {
        'inception_1x1_S1': tf.Variable(tf.random_normal([128])),
        'inception_3x3_S1_reduce': tf.Variable(tf.random_normal([128])),
        'inception_3x3_S1': tf.Variable(tf.random_normal([192])),
        'inception_5x5_S1_reduce': tf.Variable(tf.random_normal([32])),
        'inception_5x5_S1': tf.Variable(tf.random_normal([96])),
        'inception_MaxPool': tf.Variable(tf.random_normal([64]))
    }
    conv_W_4a = {
        'inception_1x1_S1': tf.Variable(tf.random_normal([1,1,480,192])),
        'inception_3x3_S1_reduce': tf.Variable(tf.random_normal([1,1,480,96])),
        'inception_3x3_S1': tf.Variable(tf.random_normal([3,3,96,208])),
        'inception_5x5_S1_reduce': tf.Variable(tf.random_normal([1,1,480,16])),
        'inception_5x5_S1': tf.Variable(tf.random_normal([5,5,16,48])),
        'inception_MaxPool': tf.Variable(tf.random_normal([1,1,480,64]))
        
    }
    conv_B_4a = {
        'inception_1x1_S1': tf.Variable(tf.random_normal([192])),
        'inception_3x3_S1_reduce': tf.Variable(tf.random_normal([96])),
        'inception_3x3_S1': tf.Variable(tf.random_normal([208])),
        'inception_5x5_S1_reduce': tf.Variable(tf.random_normal([16])),
        'inception_5x5_S1': tf.Variable(tf.random_normal([48])),
        'inception_MaxPool': tf.Variable(tf.random_normal([64]))
    }
    
    conv_W_4b = {
        'inception_1x1_S1': tf.Variable(tf.random_normal([1,1,512,160])),
        'inception_3x3_S1_reduce': tf.Variable(tf.random_normal([1,1,512,112])),
        'inception_3x3_S1': tf.Variable(tf.random_normal([3,3,112,224])),
        'inception_5x5_S1_reduce': tf.Variable(tf.random_normal([1,1,512,24])),
        'inception_5x5_S1': tf.Variable(tf.random_normal([5,5,24,64])),
        'inception_MaxPool': tf.Variable(tf.random_normal([1,1,512,64]))
       
    }
    conv_B_4b = {
        'inception_1x1_S1': tf.Variable(tf.random_normal([160])),
        'inception_3x3_S1_reduce': tf.Variable(tf.random_normal([112])),
        'inception_3x3_S1': tf.Variable(tf.random_normal([224])),
        'inception_5x5_S1_reduce': tf.Variable(tf.random_normal([24])),
        'inception_5x5_S1': tf.Variable(tf.random_normal([64])),
        'inception_MaxPool': tf.Variable(tf.random_normal([64]))
    }
    conv_W_4c = {
        'inception_1x1_S1': tf.Variable(tf.random_normal([1,1,512,128])),
        'inception_3x3_S1_reduce': tf.Variable(tf.random_normal([1,1,512,128])),
        'inception_3x3_S1': tf.Variable(tf.random_normal([3,3,128,256])),
        'inception_5x5_S1_reduce': tf.Variable(tf.random_normal([1,1,512,24])),
        'inception_5x5_S1': tf.Variable(tf.random_normal([5,5,24,64])),
        'inception_MaxPool': tf.Variable(tf.random_normal([1,1,512,64]))
        
    }
    conv_B_4c = {
        'inception_1x1_S1': tf.Variable(tf.random_normal([128])),
        'inception_3x3_S1_reduce': tf.Variable(tf.random_normal([128])),
        'inception_3x3_S1': tf.Variable(tf.random_normal([256])),
        'inception_5x5_S1_reduce': tf.Variable(tf.random_normal([24])),
        'inception_5x5_S1': tf.Variable(tf.random_normal([64])),
        'inception_MaxPool': tf.Variable(tf.random_normal([64]))
    }
    conv_W_4d = {
        'inception_1x1_S1': tf.Variable(tf.random_normal([1,1,512,112])),
        'inception_3x3_S1_reduce': tf.Variable(tf.random_normal([1,1,512,144])),
        'inception_3x3_S1': tf.Variable(tf.random_normal([3,3,144,288])),
        'inception_5x5_S1_reduce': tf.Variable(tf.random_normal([1,1,512,32])),
        'inception_5x5_S1': tf.Variable(tf.random_normal([5,5,32,64])),
        'inception_MaxPool': tf.Variable(tf.random_normal([1,1,512,64]))
        
    }
    conv_B_4d = {
        'inception_1x1_S1': tf.Variable(tf.random_normal([112])),
        'inception_3x3_S1_reduce': tf.Variable(tf.random_normal([144])),
        'inception_3x3_S1': tf.Variable(tf.random_normal([288])),
        'inception_5x5_S1_reduce': tf.Variable(tf.random_normal([32])),
        'inception_5x5_S1': tf.Variable(tf.random_normal([64])),
        'inception_MaxPool': tf.Variable(tf.random_normal([64]))
    }
    conv_W_4e = {
        'inception_1x1_S1': tf.Variable(tf.random_normal([1,1,528,256])),
        'inception_3x3_S1_reduce': tf.Variable(tf.random_normal([1,1,528,160])),
        'inception_3x3_S1': tf.Variable(tf.random_normal([3,3,160,320])),
        'inception_5x5_S1_reduce': tf.Variable(tf.random_normal([1,1,528,32])),
        'inception_5x5_S1': tf.Variable(tf.random_normal([5,5,32,128])),
        'inception_MaxPool': tf.Variable(tf.random_normal([1,1,528,128]))
        
    }
    conv_B_4e = {
        'inception_1x1_S1': tf.Variable(tf.random_normal([256])),
        'inception_3x3_S1_reduce': tf.Variable(tf.random_normal([160])),
        'inception_3x3_S1': tf.Variable(tf.random_normal([320])),
        'inception_5x5_S1_reduce': tf.Variable(tf.random_normal([32])),
        'inception_5x5_S1': tf.Variable(tf.random_normal([128])),
        'inception_MaxPool': tf.Variable(tf.random_normal([128]))
    }
    conv_W_5a = {
        'inception_1x1_S1': tf.Variable(tf.random_normal([1,1,832,256])),
        'inception_3x3_S1_reduce': tf.Variable(tf.random_normal([1,1,832,160])),
        'inception_3x3_S1': tf.Variable(tf.random_normal([3,3,160,320])),
        'inception_5x5_S1_reduce': tf.Variable(tf.random_normal([1,1,832,32])),
        'inception_5x5_S1': tf.Variable(tf.random_normal([5,5,32,128])),
        'inception_MaxPool': tf.Variable(tf.random_normal([1,1,832,128]))
        
    }
    conv_B_5a = {
        'inception_1x1_S1': tf.Variable(tf.random_normal([256])),
        'inception_3x3_S1_reduce': tf.Variable(tf.random_normal([160])),
        'inception_3x3_S1': tf.Variable(tf.random_normal([320])),
        'inception_5x5_S1_reduce': tf.Variable(tf.random_normal([32])),
        'inception_5x5_S1': tf.Variable(tf.random_normal([128])),
        'inception_MaxPool': tf.Variable(tf.random_normal([128]))
    }
    
    conv_W_5b = {
        'inception_1x1_S1': tf.Variable(tf.random_normal([1,1,832,384])),
        'inception_3x3_S1_reduce': tf.Variable(tf.random_normal([1,1,832,192])),
        'inception_3x3_S1': tf.Variable(tf.random_normal([3,3,192,384])),
        'inception_5x5_S1_reduce': tf.Variable(tf.random_normal([1,1,832,48])),
        'inception_5x5_S1': tf.Variable(tf.random_normal([5,5,48,128])),
        'inception_MaxPool': tf.Variable(tf.random_normal([1,1,832,128]))
        
    }
    conv_B_5b = {
        'inception_1x1_S1': tf.Variable(tf.random_normal([384])),
        'inception_3x3_S1_reduce': tf.Variable(tf.random_normal([192])),
        'inception_3x3_S1': tf.Variable(tf.random_normal([384])),
        'inception_5x5_S1_reduce': tf.Variable(tf.random_normal([48])),
        'inception_5x5_S1': tf.Variable(tf.random_normal([128])),
        'inception_MaxPool': tf.Variable(tf.random_normal([128]))
    }
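
    With all of the weight and bias dictionaries defined, the graph can be built and run on dummy data as a quick sanity check. A minimal sketch (my addition; it assumes TensorFlow 1.x graph mode, and x_in / logits / dummy are names I introduce, fed with random data rather than real factor data):

    x_in = tf.placeholder(tf.float32, [None, 224*224*4])
    logits = GoogleLeNet_topological_structure(
        x_in, weights, biases,
        conv_W_3a, conv_B_3a, conv_W_3b, conv_B_3b,
        conv_W_4a, conv_B_4a, conv_W_4b, conv_B_4b,
        conv_W_4c, conv_B_4c, conv_W_4d, conv_B_4d,
        conv_W_4e, conv_B_4e, conv_W_5a, conv_B_5a,
        conv_W_5b, conv_B_5b)
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        dummy = np.random.randn(2, 224*224*4).astype(np.float32)  # 2 random samples
        print(sess.run(logits, feed_dict={x_in: dummy}).shape)    # expected: (2, 3)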
    
    

    2. Representing the HS300 technical-analysis indicator data as images

    2.1 Process the technical-analysis indicators into time-series multi-factor data. This post uses the previous 56 days of data to predict the rise or fall over the following 14 days; the data processing is below.

    import datetime
    import lib.TAFPL as TAF
    
    HS300 = pd.read_csv('NHS300.csv')
    del HS300['Unnamed: 0']
    HS300 = TAF.Technical_Analysis_Factor_Normalization(inputdata= HS300, rolling=16*40, Tdropna= True)
    
    Dtmp = pd.read_csv('NDHS300.csv')
    del Dtmp['Unnamed: 0']
    
    # predict the rise or fall over the next 14 days
    Dtmp['actual_future_rate_of_return'] = Dtmp.closePrice.shift(-14)/Dtmp.closePrice - 1.0
    Dtmp = Dtmp.dropna()
    Dtmp = Dtmp[-200:]
    Dtmp['Direction_Label'] = 0
    Dtmp.actual_future_rate_of_return.describe()
    Dtmp.loc[Dtmp.actual_future_rate_of_return>0.025,'Direction_Label'] = 1
    Dtmp.loc[Dtmp.actual_future_rate_of_return<-0.01,'Direction_Label'] = -1
    Dtmp.reset_index(drop= True , inplace= True)
    
    start = Dtmp.tradingPeriod.values[0]
    end = Dtmp.tradingPeriod.values[-1]
    end = datetime.datetime.strptime(end,'%Y-%m-%d') # convert the string to a datetime
    end = end + datetime.timedelta(days=1) # add one day
    end = end.strftime('%Y-%m-%d')
    
    fac_HS300 = HS300.loc[(HS300.tradingPeriod.values>start) & (HS300.tradingPeriod.values<end)].reset_index(drop=True)
    fac_list = TAF.get_Multifactor_list(fac_HS300)
    fac_list = fac_list[:56]
    
    fe = 56 # look-back window in trading days
    tmp_HS300 = np.zeros((1,fe*16*56))
    for i in np.arange(fe,int(len(fac_HS300)/16)):
        tmp = fac_HS300.loc[16*(i-fe):16*i-1][fac_list]
        tmp = np.array(tmp).ravel(order='C').transpose()
        tmp_HS300 = np.vstack((tmp_HS300,tmp))
    tmp_HS300 = np.delete(tmp_HS300,0,axis=0)
    
    

    ('Number of data losses', 726, 'Ratio : 0.0000000')
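
    As a sanity check on the shapes (my addition): each sample spans 56 trading days of 16 intraday bars with 56 factors per bar, and 56 x 16 x 56 = 50176 = 224 x 224, which is why each row of tmp_HS300 can later be reshaped into a 224x224 "image".

    days, bars_per_day, n_factors = 56, 16, 56
    assert days * bars_per_day * n_factors == 224 * 224 == 50176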

    2.2 An intuitive picture: the technical-analysis indicator values around a trading day are assembled into image pixel data. CNNs are normally designed for machine vision, that is, for processing images and video; the figure below shows the multi-factor input from that computer-vision point of view. Because the technical-analysis factors were standardized (normalized) earlier, the factor data is scaled and offset here.

    import matplotlib.pyplot as plt
    import matplotlib.image as mpimg
    plt.figure(figsize=(8,8))
    shpig = tmp_HS300[1]
    shpig = shpig.reshape(224,224)
    shpig += 4   # offset the normalized factors
    shpig *= 26  # scale them into a pixel-like range
    plt.axis("off")
    plt.imshow(shpig)
    plt.show()
    
    
    from mpl_toolkits.mplot3d import axes3d
    import matplotlib.pyplot as plt
    x = range(shpig.shape[0])
    y = range(shpig.shape[1])
    xm, ym = np.meshgrid(x,y)
    fig = plt.figure(figsize=(12,12))
    ax = fig.gca(projection='3d')
    ax.plot_wireframe(xm, ym, shpig, rstride=10, cstride=0)
    plt.show()
    
    
    from mpl_toolkits.mplot3d import Axes3D
    from matplotlib import cbook
    from matplotlib import cm
    from matplotlib.colors import LightSource
    import matplotlib.pyplot as plt
    x = xm
    y = ym
    z = shpig
    
    
    fig,ax = plt.subplots(figsize=(15,15),subplot_kw=dict(projection='3d'))
    ls = LightSource(270, 45)
    rgb = ls.shade(z, cmap=cm.gist_earth, vert_exag=0.1, blend_mode='soft')
    
    surf = ax.plot_surface(x, y, z, rstride=1, cstride=1, facecolors=rgb,
                           linewidth=0, antialiased=False, shade=False)
    plt.show()
    
    

    2.3 Stacking the SSE Composite, CSI 500 and ChiNext indices on top of HS300. In technical analysis one usually looks at the broad market together with an individual stock's trend; here noise is added by stacking several charts, to explore whether a quantifiable relationship exists among these indices, by analogy with composing a color picture from the three RGB primaries (cut short by the length limit).
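
    As a rough illustration of that stacking idea (my addition; the data is random and the variable names are hypothetical), four per-index 224x224 factor "images" can be combined into one 4-channel sample, matching the network's [-1, 224, 224, 4] input:

    import numpy as np
    # dummy stand-ins for the HS300, SSE Composite, CSI 500 and ChiNext factor images
    imgs = [np.random.randn(224, 224) for _ in range(4)]
    sample = np.stack(imgs, axis=-1)
    print(sample.shape)            # (224, 224, 4)
    flat = sample.reshape(1, -1)   # one flattened row, as fed to the network input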

    The post uses 56 days of technical-analysis factors from several index lines as training data and extracts features with the CNN's convolutions; the HS300 index is labelled by its movement over the following 14 days as down (-1), flat (0) or up (1). That feature-extraction part is omitted here because of the length limit.
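
    Since the training step itself is omitted from the post, here is only a hedged sketch (my addition, not the author's code) of what a TF 1.x objective for the three classes {down, flat, up} could look like, assuming x_in and logits were built as in the sanity-check sketch above and the labels have been one-hot encoded:

    y_true = tf.placeholder(tf.float32, [None, 3])   # one-hot labels for {-1, 0, 1}
    loss = tf.reduce_mean(
        tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y_true))
    train_op = tf.train.AdamOptimizer(learning_rate=1e-4).minimize(loss)
    # in a session: sess.run(train_op, feed_dict={x_in: batch_x, y_true: batch_y})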

    V2EX does not allow images in posts and has a length limit, so the logic diagrams in the original article may be clearer. Original article: https://uqer.io/community/share/58777d8289e3ba004defe973

    enenaaa    #1    2017-01-17 17:09:16 +08:00
    Anyone trying to use AI to analyze stocks is talking nonsense.

    ryd994    #2    2017-01-17 18:11:48 +08:00 via Android
    @enenaaa High-frequency trading might still be worth something,
    but predicting 14 days ahead...
    I think forecasting the weather for the day after tomorrow would be more reliable.

    piokhj    #3    2017-01-18 10:12:00 +08:00
    Hmm, you'd have to feed it some RMB to test that.