I previously defined AlphaGo's SL policy network using Chainer; this time I tried defining it in Caffe as well.
In Caffe, models are defined in the prototxt format.
The SL policy network can be defined as follows.
sl_policy_network.prototxt
name: "SLPolicyNetwork" layer { name: "input" type: "MemoryData" top: "input" top: "label" memory_data_param { batch_size: 16 channels: 48 height: 19 width: 19 } } layer { name: "layer1" type: "Convolution" bottom: "input" top: "layer1" convolution_param { num_output: 192 kernel_size: 5 pad: 2 weight_filler { type: "xavier" } bias_filler { type: "constant" } } } layer { name: "relu1" type: "ReLU" bottom: "layer1" top: "layer1" } layer { name: "layer2" type: "Convolution" bottom: "layer1" top: "layer2" convolution_param { num_output: 192 kernel_size: 3 pad: 1 weight_filler { type: "xavier" } bias_filler { type: "constant" } } } layer { name: "relu2" type: "ReLU" bottom: "layer2" top: "layer2" } layer { name: "layer3" type: "Convolution" bottom: "layer2" top: "layer3" convolution_param { num_output: 192 kernel_size: 3 pad: 1 weight_filler { type: "xavier" } bias_filler { type: "constant" } } } layer { name: "relu3" type: "ReLU" bottom: "layer3" top: "layer3" } layer { name: "layer4" type: "Convolution" bottom: "layer3" top: "layer4" convolution_param { num_output: 192 kernel_size: 3 pad: 1 weight_filler { type: "xavier" } bias_filler { type: "constant" } } } layer { name: "relu4" type: "ReLU" bottom: "layer4" top: "layer4" } layer { name: "layer5" type: "Convolution" bottom: "layer4" top: "layer5" convolution_param { num_output: 192 kernel_size: 3 pad: 1 weight_filler { type: "xavier" } bias_filler { type: "constant" } } } layer { name: "relu5" type: "ReLU" bottom: "layer5" top: "layer5" } layer { name: "layer6" type: "Convolution" bottom: "layer5" top: "layer6" convolution_param { num_output: 192 kernel_size: 3 pad: 1 weight_filler { type: "xavier" } bias_filler { type: "constant" } } } layer { name: "relu6" type: "ReLU" bottom: "layer6" top: "layer6" } layer { name: "layer7" type: "Convolution" bottom: "layer6" top: "layer7" convolution_param { num_output: 192 kernel_size: 3 pad: 1 weight_filler { type: "xavier" } bias_filler { type: "constant" } } } layer { name: "relu7" type: "ReLU" bottom: "layer7" top: "layer7" } layer { name: "layer8" type: "Convolution" bottom: "layer7" top: "layer8" convolution_param { num_output: 192 kernel_size: 3 pad: 1 weight_filler { type: "xavier" } bias_filler { type: "constant" } } } layer { name: "relu8" type: "ReLU" bottom: "layer8" top: "layer8" } layer { name: "layer9" type: "Convolution" bottom: "layer8" top: "layer9" convolution_param { num_output: 192 kernel_size: 3 pad: 1 weight_filler { type: "xavier" } bias_filler { type: "constant" } } } layer { name: "relu9" type: "ReLU" bottom: "layer9" top: "layer9" } layer { name: "layer10" type: "Convolution" bottom: "layer9" top: "layer10" convolution_param { num_output: 192 kernel_size: 3 pad: 1 weight_filler { type: "xavier" } bias_filler { type: "constant" } } } layer { name: "relu10" type: "ReLU" bottom: "layer10" top: "layer10" } layer { name: "layer11" type: "Convolution" bottom: "layer10" top: "layer11" convolution_param { num_output: 192 kernel_size: 3 pad: 1 weight_filler { type: "xavier" } bias_filler { type: "constant" } } } layer { name: "relu11" type: "ReLU" bottom: "layer11" top: "layer11" } layer { name: "layer12" type: "Convolution" bottom: "layer11" top: "layer12" convolution_param { num_output: 192 kernel_size: 3 pad: 1 weight_filler { type: "xavier" } bias_filler { type: "constant" } } } layer { name: "relu12" type: "ReLU" bottom: "layer12" top: "layer12" } layer { name: "layer13" type: "Convolution" bottom: "layer12" top: "layer13" convolution_param { num_output: 1 kernel_size: 1 pad: 0 
bias_term: false weight_filler { type: "xavier" } } } layer { name: "bias13" type: "Bias" bottom: "layer13" top: "layer13" } layer { name: "reshape" type: "Reshape" bottom: "layer13" top: "output" reshape_param { shape { dim: 0 dim: -1 dim: 1 dim: 1 } } } layer { name: "loss" type: "SoftmaxWithLoss" bottom: "output" bottom: "label" top: "loss" }
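As a sanity check, the network can be loaded with pycaffe and run on dummy data through the MemoryData layer. The following is a minimal sketch; the file name matches the prototxt above, and the random inputs are purely illustrative.

import numpy as np
import caffe

caffe.set_mode_cpu()

# Load the network in the TRAIN phase (SoftmaxWithLoss needs labels).
net = caffe.Net('sl_policy_network.prototxt', caffe.TRAIN)

# Dummy batch: 16 positions x 48 feature planes x 19x19 board.
data = np.random.rand(16, 48, 19, 19).astype(np.float32)
labels = np.random.randint(0, 361, size=16).astype(np.float32)

# MemoryData layers are fed through set_input_arrays;
# the number of samples must be a multiple of batch_size (16 here).
net.set_input_arrays(data, labels)
out = net.forward()
print(out['loss'])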
The input layer is a MemoryData layer here, but other input types (such as HDF5Data, or Data backed by LMDB) would work as well.
layer1 is a convolution with 192 filters of size 5×5, pad = 2, and ReLU activation;
layer2 through layer12 are convolutions with 192 filters of size 3×3, pad = 1, and ReLU activations. With stride 1, the padding preserves the 19×19 board size: (19 + 2·2 − 5) + 1 = 19 for layer1 and (19 + 2·1 − 3) + 1 = 19 for the rest.
The final layer is a 1×1 convolution with a single filter plus a position-dependent bias, and its output goes through a softmax.
Chainer had no built-in function for a position-dependent bias, so I defined one myself, but Caffe provides a BiasLayer out of the box.
※ Update (7/9): A Bias link has been added to Chainer, so the Bias function is now available as standard in the latest Chainer.
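For reference, the new Chainer link can express the same thing. A minimal sketch, assuming the bias is applied to the flattened 361-element output:

import numpy as np
import chainer
import chainer.links as L

# One learnable bias per board intersection (19 * 19 = 361),
# added to the flattened output of the final 1x1 convolution.
bias = L.Bias(shape=(361,))

x = chainer.Variable(np.zeros((16, 361), dtype=np.float32))
y = bias(x)  # adds the 361 biases to every sample in the batch
print(y.data.shape)  # (16, 361)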
Before the softmax, a Reshape is needed to flatten the 1×19×19 map into a one-dimensional, 361-element vector, since SoftmaxWithLoss normalizes along the channel axis.
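Note that SoftmaxWithLoss fuses the softmax with the cross-entropy loss and is only used during training; at inference time the 361 scores would instead go through a plain softmax (a Softmax layer in Caffe, or by hand as in this sketch, where the blob name 'output' comes from the Reshape layer above):

import numpy as np

def move_probabilities(scores):
    # scores: (batch, 361) raw scores, e.g.
    # net.blobs['output'].data.reshape(-1, 361)
    # subtract the row max for numerical stability
    e = np.exp(scores - scores.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)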
I plan to compare the execution speed against Chainer and do the full-scale training with whichever is faster.